00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 257 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.062 The recommended git tool is: git 00:00:00.062 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.220 > git --version # 'git version 2.39.2' 00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:23.288 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:23.303 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:23.318 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:23.318 > git config core.sparsecheckout # timeout=10 00:00:23.331 > git read-tree -mu HEAD # timeout=10 00:00:23.349 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:23.368 Commit message: "inventory/dev: add missing long names" 00:00:23.369 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:23.452 [Pipeline] Start of Pipeline 00:00:23.466 [Pipeline] library 00:00:23.468 Loading library shm_lib@master 00:00:23.468 Library shm_lib@master is cached. Copying from home. 
00:00:23.487 [Pipeline] node 00:00:38.489 Still waiting to schedule task 00:00:38.490 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘GP1’ is offline 00:00:38.490 ‘GP4’ is offline 00:00:38.490 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM10’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM11’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.490 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WCP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WCP4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP12’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP13’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP15’ is offline 00:00:38.491 ‘WFP16’ is offline 00:00:38.491 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP21’ is offline 00:00:38.491 ‘WFP24’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP45’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP4’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP64’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.491 ‘WFP6’ is offline 00:00:38.492 ‘WFP9’ is offline 00:00:38.492 ‘agt-_autotest_20015-13645’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘agt-_autotest_20016-13648’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘agt-_autotest_20017-13647’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘agt-_autotest_20018-13644’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘agt-_autotest_22326-13646’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 
‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:00:38.492 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’ 00:07:15.267 Running on CYP10 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:15.268 [Pipeline] { 00:07:15.283 [Pipeline] catchError 00:07:15.285 [Pipeline] { 00:07:15.299 [Pipeline] wrap 00:07:15.309 [Pipeline] { 00:07:15.316 [Pipeline] stage 00:07:15.318 [Pipeline] { (Prologue) 00:07:15.489 [Pipeline] sh 00:07:16.478 + logger -p user.info -t JENKINS-CI 00:07:16.505 [Pipeline] echo 00:07:16.507 Node: CYP10 00:07:16.516 [Pipeline] sh 00:07:16.857 [Pipeline] setCustomBuildProperty 00:07:16.870 [Pipeline] echo 00:07:16.871 Cleanup processes 00:07:16.877 [Pipeline] sh 00:07:17.179 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.179 4440 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.196 [Pipeline] sh 00:07:17.496 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.496 ++ grep -v 'sudo pgrep' 00:07:17.496 ++ awk '{print $1}' 00:07:17.496 + sudo kill -9 00:07:17.496 + true 00:07:17.514 [Pipeline] cleanWs 00:07:17.524 [WS-CLEANUP] Deleting project workspace... 00:07:17.524 [WS-CLEANUP] Deferred wipeout is used... 00:07:17.538 [WS-CLEANUP] done 00:07:17.545 [Pipeline] setCustomBuildProperty 00:07:17.564 [Pipeline] sh 00:07:17.857 + sudo git config --global --replace-all safe.directory '*' 00:07:17.935 [Pipeline] nodesByLabel 00:07:17.937 Found a total of 1 nodes with the 'sorcerer' label 00:07:17.948 [Pipeline] httpRequest 00:07:18.205 HttpMethod: GET 00:07:18.205 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:19.138 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:19.419 Response Code: HTTP/1.1 200 OK 00:07:19.502 Success: Status code 200 is in the accepted range: 200,404 00:07:19.504 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:19.817 [Pipeline] sh 00:07:20.117 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:20.138 [Pipeline] httpRequest 00:07:20.145 HttpMethod: GET 00:07:20.146 URL: http://10.211.164.101/packages/spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:07:20.147 Sending request to url: http://10.211.164.101/packages/spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:07:20.151 Response Code: HTTP/1.1 200 OK 00:07:20.152 Success: Status code 200 is in the accepted range: 200,404 00:07:20.152 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:07:23.278 [Pipeline] sh 00:07:23.574 + tar --no-same-owner -xf spdk_cc94f303140837bf6a876bdb960d7af86788f2db.tar.gz 00:07:26.150 [Pipeline] sh 00:07:26.447 + git -C spdk log --oneline -n5 00:07:26.448 cc94f3031 raid1: handle read errors 00:07:26.448 6e950b24b raid1: move function to avoid forward declaration later 00:07:26.448 d6aa653d2 raid1: remove common base bdev io completion function 00:07:26.448 b0b0889ef raid1: handle write errors 00:07:26.448 9820a9496 raid: add a default completion status to raid_bdev_io 00:07:26.463 [Pipeline] sh 00:07:26.756 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/84/23184/6 00:07:27.703 From https://review.spdk.io/gerrit/spdk/dpdk 
00:07:27.703 * branch refs/changes/84/23184/6 -> FETCH_HEAD 00:07:27.716 [Pipeline] sh 00:07:28.004 + git -C spdk/dpdk checkout FETCH_HEAD 00:07:28.579 Previous HEAD position was db99adb13f kernel/freebsd: fix module build on FreeBSD 14 00:07:28.579 HEAD is now at d0dd711a38 crypto: increase RTE_CRYPTO_MAX_DEVS to accomodate QAT SYM and ASYM VFs 00:07:28.591 [Pipeline] } 00:07:28.605 [Pipeline] // stage 00:07:28.612 [Pipeline] stage 00:07:28.614 [Pipeline] { (Prepare) 00:07:28.629 [Pipeline] writeFile 00:07:28.651 [Pipeline] sh 00:07:28.949 + logger -p user.info -t JENKINS-CI 00:07:28.966 [Pipeline] sh 00:07:29.264 + logger -p user.info -t JENKINS-CI 00:07:29.280 [Pipeline] sh 00:07:29.571 + cat autorun-spdk.conf 00:07:29.571 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:29.571 SPDK_TEST_NVMF=1 00:07:29.571 SPDK_TEST_NVME_CLI=1 00:07:29.571 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:29.571 SPDK_TEST_NVMF_NICS=e810 00:07:29.571 SPDK_TEST_VFIOUSER=1 00:07:29.571 SPDK_RUN_UBSAN=1 00:07:29.571 NET_TYPE=phy 00:07:29.581 RUN_NIGHTLY= 00:07:29.585 [Pipeline] readFile 00:07:29.624 [Pipeline] withEnv 00:07:29.626 [Pipeline] { 00:07:29.639 [Pipeline] sh 00:07:29.931 + set -ex 00:07:29.931 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:07:29.931 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:29.931 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:29.931 ++ SPDK_TEST_NVMF=1 00:07:29.931 ++ SPDK_TEST_NVME_CLI=1 00:07:29.931 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:29.931 ++ SPDK_TEST_NVMF_NICS=e810 00:07:29.932 ++ SPDK_TEST_VFIOUSER=1 00:07:29.932 ++ SPDK_RUN_UBSAN=1 00:07:29.932 ++ NET_TYPE=phy 00:07:29.932 ++ RUN_NIGHTLY= 00:07:29.932 + case $SPDK_TEST_NVMF_NICS in 00:07:29.932 + DRIVERS=ice 00:07:29.932 + [[ tcp == \r\d\m\a ]] 00:07:29.932 + [[ -n ice ]] 00:07:29.932 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:07:29.932 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:07:36.539 rmmod: ERROR: Module irdma is not currently loaded 00:07:36.539 rmmod: ERROR: Module i40iw is not currently loaded 00:07:36.539 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:07:36.539 + true 00:07:36.539 + for D in $DRIVERS 00:07:36.539 + sudo modprobe ice 00:07:36.539 + exit 0 00:07:36.551 [Pipeline] } 00:07:36.568 [Pipeline] // withEnv 00:07:36.574 [Pipeline] } 00:07:36.589 [Pipeline] // stage 00:07:36.598 [Pipeline] catchError 00:07:36.600 [Pipeline] { 00:07:36.613 [Pipeline] timeout 00:07:36.614 Timeout set to expire in 40 min 00:07:36.615 [Pipeline] { 00:07:36.630 [Pipeline] stage 00:07:36.631 [Pipeline] { (Tests) 00:07:36.648 [Pipeline] sh 00:07:36.941 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:36.941 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:36.941 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:36.941 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:07:36.941 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:36.941 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:07:36.941 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:07:36.941 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:07:36.941 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:07:36.941 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:07:36.941 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:07:36.941 + source /etc/os-release 00:07:36.941 ++ NAME='Fedora Linux' 00:07:36.941 ++ VERSION='38 (Cloud Edition)' 00:07:36.941 ++ ID=fedora 00:07:36.941 ++ VERSION_ID=38 00:07:36.941 ++ VERSION_CODENAME= 00:07:36.941 ++ PLATFORM_ID=platform:f38 00:07:36.941 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:07:36.941 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:36.941 ++ LOGO=fedora-logo-icon 00:07:36.941 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:07:36.941 ++ HOME_URL=https://fedoraproject.org/ 00:07:36.941 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:07:36.941 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:36.941 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:36.941 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:36.941 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:07:36.941 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:36.941 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:07:36.941 ++ SUPPORT_END=2024-05-14 00:07:36.941 ++ VARIANT='Cloud Edition' 00:07:36.941 ++ VARIANT_ID=cloud 00:07:36.941 + uname -a 00:07:36.941 Linux spdk-cyp-10 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:07:36.941 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:40.256 Hugepages 00:07:40.256 node hugesize free / total 00:07:40.256 node0 1048576kB 0 / 0 00:07:40.256 node0 2048kB 0 / 0 00:07:40.256 node1 1048576kB 0 / 0 00:07:40.256 node1 2048kB 0 / 0 00:07:40.256 00:07:40.256 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:40.256 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:07:40.256 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:07:40.256 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:07:40.256 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:07:40.256 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:07:40.256 + rm -f /tmp/spdk-ld-path 00:07:40.256 + source autorun-spdk.conf 00:07:40.256 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:40.256 ++ SPDK_TEST_NVMF=1 00:07:40.256 ++ SPDK_TEST_NVME_CLI=1 00:07:40.256 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:40.256 ++ SPDK_TEST_NVMF_NICS=e810 00:07:40.256 ++ SPDK_TEST_VFIOUSER=1 00:07:40.256 ++ SPDK_RUN_UBSAN=1 00:07:40.256 ++ NET_TYPE=phy 00:07:40.256 ++ RUN_NIGHTLY= 00:07:40.256 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:40.256 + [[ -n '' ]] 00:07:40.256 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.256 + for M in /var/spdk/build-*-manifest.txt 00:07:40.256 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:40.256 + cp 
/var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:40.256 + for M in /var/spdk/build-*-manifest.txt 00:07:40.256 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:40.256 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:07:40.256 ++ uname 00:07:40.256 + [[ Linux == \L\i\n\u\x ]] 00:07:40.256 + sudo dmesg -T 00:07:40.256 + sudo dmesg --clear 00:07:40.256 + dmesg_pid=5512 00:07:40.256 + [[ Fedora Linux == FreeBSD ]] 00:07:40.256 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.256 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.256 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:40.256 + sudo dmesg -Tw 00:07:40.256 + [[ -x /usr/src/fio-static/fio ]] 00:07:40.256 + export FIO_BIN=/usr/src/fio-static/fio 00:07:40.256 + FIO_BIN=/usr/src/fio-static/fio 00:07:40.256 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:40.256 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:40.256 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:40.256 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.256 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.256 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:40.256 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.256 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.256 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:40.256 Test configuration: 00:07:40.256 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:40.256 SPDK_TEST_NVMF=1 00:07:40.256 SPDK_TEST_NVME_CLI=1 00:07:40.256 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:40.256 SPDK_TEST_NVMF_NICS=e810 00:07:40.256 SPDK_TEST_VFIOUSER=1 00:07:40.256 SPDK_RUN_UBSAN=1 00:07:40.256 NET_TYPE=phy 00:07:40.256 RUN_NIGHTLY= 09:20:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.256 09:20:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:40.256 09:20:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.256 09:20:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.256 09:20:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.256 09:20:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.256 09:20:33 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.256 09:20:33 -- paths/export.sh@5 -- $ export PATH 00:07:40.256 09:20:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.256 09:20:33 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:40.256 09:20:33 -- common/autobuild_common.sh@437 -- $ date +%s 00:07:40.256 09:20:33 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715844033.XXXXXX 00:07:40.256 09:20:33 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715844033.JS3nZE 00:07:40.256 09:20:33 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:07:40.256 09:20:33 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:07:40.256 09:20:33 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:07:40.256 09:20:33 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:07:40.257 09:20:33 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:07:40.257 09:20:33 -- common/autobuild_common.sh@453 -- $ get_config_params 00:07:40.257 09:20:33 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:07:40.257 09:20:33 -- common/autotest_common.sh@10 -- $ set +x 00:07:40.257 09:20:33 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:07:40.257 09:20:33 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:07:40.257 09:20:33 -- pm/common@17 -- $ local monitor 00:07:40.257 09:20:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.257 09:20:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.257 09:20:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.257 09:20:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.257 09:20:33 -- pm/common@21 -- $ date +%s 00:07:40.257 09:20:33 -- pm/common@21 -- $ date +%s 00:07:40.257 09:20:33 -- pm/common@25 -- $ sleep 1 00:07:40.257 09:20:33 -- pm/common@21 -- $ date +%s 00:07:40.257 09:20:33 -- pm/common@21 -- $ date +%s 00:07:40.257 09:20:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715844033 00:07:40.257 09:20:33 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715844033 00:07:40.257 09:20:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715844033 00:07:40.257 09:20:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715844033 00:07:40.257 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715844033_collect-vmstat.pm.log 00:07:40.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715844033_collect-cpu-load.pm.log 00:07:40.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715844033_collect-cpu-temp.pm.log 00:07:40.519 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715844033_collect-bmc-pm.bmc.pm.log 00:07:41.466 09:20:34 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:07:41.466 09:20:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:41.466 09:20:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:41.466 09:20:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.466 09:20:34 -- spdk/autobuild.sh@16 -- $ date -u 00:07:41.467 Thu May 16 07:20:34 AM UTC 2024 00:07:41.467 09:20:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:41.467 v24.05-pre-687-gcc94f3031 00:07:41.467 09:20:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:41.467 09:20:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:41.467 09:20:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:41.467 09:20:34 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:07:41.467 09:20:34 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:07:41.467 09:20:34 -- common/autotest_common.sh@10 -- $ set +x 00:07:41.467 ************************************ 00:07:41.467 START TEST ubsan 00:07:41.467 ************************************ 00:07:41.467 09:20:34 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:07:41.467 using ubsan 00:07:41.467 00:07:41.467 real 0m0.001s 00:07:41.467 user 0m0.000s 00:07:41.467 sys 0m0.000s 00:07:41.467 09:20:34 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:07:41.467 09:20:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:41.467 ************************************ 00:07:41.467 END TEST ubsan 00:07:41.467 ************************************ 00:07:41.467 09:20:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:41.467 09:20:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:41.467 09:20:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:41.467 09:20:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:41.467 09:20:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:41.467 09:20:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:41.467 09:20:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:41.467 09:20:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:41.467 09:20:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:07:42.039 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:42.039 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:42.984 Using 'verbs' RDMA provider 00:08:02.063 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:08:14.323 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:08:14.323 Creating mk/config.mk...done. 00:08:14.323 Creating mk/cc.flags.mk...done. 00:08:14.323 Type 'make' to build. 00:08:14.323 09:21:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:08:14.323 09:21:06 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:08:14.323 09:21:06 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:08:14.323 09:21:06 -- common/autotest_common.sh@10 -- $ set +x 00:08:14.323 ************************************ 00:08:14.323 START TEST make 00:08:14.323 ************************************ 00:08:14.323 09:21:06 make -- common/autotest_common.sh@1121 -- $ make -j144 00:08:14.323 make[1]: Nothing to be done for 'all'. 00:08:16.245 The Meson build system 00:08:16.245 Version: 1.3.1 00:08:16.245 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:08:16.245 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:08:16.245 Build type: native build 00:08:16.245 Project name: libvfio-user 00:08:16.245 Project version: 0.0.1 00:08:16.245 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:16.245 C linker for the host machine: cc ld.bfd 2.39-16 00:08:16.245 Host machine cpu family: x86_64 00:08:16.245 Host machine cpu: x86_64 00:08:16.245 Run-time dependency threads found: YES 00:08:16.245 Library dl found: YES 00:08:16.245 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:16.245 Run-time dependency json-c found: YES 0.17 00:08:16.245 Run-time dependency cmocka found: YES 1.1.7 00:08:16.245 Program pytest-3 found: NO 00:08:16.245 Program flake8 found: NO 00:08:16.245 Program misspell-fixer found: NO 00:08:16.245 Program restructuredtext-lint found: NO 00:08:16.245 Program valgrind found: YES (/usr/bin/valgrind) 00:08:16.245 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:16.245 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:16.245 Compiler for C supports arguments -Wwrite-strings: YES 00:08:16.245 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:08:16.245 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:08:16.245 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:08:16.245 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:08:16.245 Build targets in project: 8 00:08:16.245 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:08:16.245 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:08:16.245 00:08:16.245 libvfio-user 0.0.1 00:08:16.245 00:08:16.245 User defined options 00:08:16.245 buildtype : debug 00:08:16.245 default_library: shared 00:08:16.245 libdir : /usr/local/lib 00:08:16.245 00:08:16.245 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:16.507 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:08:16.507 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:08:16.507 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:08:16.507 [3/37] Compiling C object samples/null.p/null.c.o 00:08:16.507 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:08:16.507 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:08:16.507 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:08:16.507 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:08:16.507 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:08:16.507 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:08:16.507 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:08:16.507 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:08:16.507 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:08:16.507 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:08:16.768 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:08:16.768 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:08:16.768 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:08:16.768 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:08:16.768 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:08:16.768 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:08:16.768 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:08:16.768 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:08:16.768 [22/37] Compiling C object samples/server.p/server.c.o 00:08:16.768 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:08:16.768 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:08:16.768 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:08:16.768 [26/37] Compiling C object samples/client.p/client.c.o 00:08:16.768 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:08:16.768 [28/37] Linking target samples/client 00:08:16.768 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:08:16.768 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:08:16.768 [31/37] Linking target test/unit_tests 00:08:17.031 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:08:17.031 [33/37] Linking target samples/lspci 00:08:17.031 [34/37] Linking target samples/null 00:08:17.031 [35/37] Linking target samples/server 00:08:17.031 [36/37] Linking target samples/gpio-pci-idio-16 00:08:17.031 [37/37] Linking target samples/shadow_ioeventfd_server 00:08:17.031 INFO: autodetecting backend as ninja 00:08:17.031 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:08:17.031 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:08:17.295 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:08:17.295 ninja: no work to do. 00:08:22.593 The Meson build system 00:08:22.593 Version: 1.3.1 00:08:22.593 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:08:22.593 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:08:22.593 Build type: native build 00:08:22.593 Program cat found: YES (/usr/bin/cat) 00:08:22.593 Project name: DPDK 00:08:22.593 Project version: 24.03.0 00:08:22.593 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:22.593 C linker for the host machine: cc ld.bfd 2.39-16 00:08:22.593 Host machine cpu family: x86_64 00:08:22.593 Host machine cpu: x86_64 00:08:22.593 Message: ## Building in Developer Mode ## 00:08:22.593 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:22.593 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:08:22.593 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:22.593 Program python3 found: YES (/usr/bin/python3) 00:08:22.593 Program cat found: YES (/usr/bin/cat) 00:08:22.593 Compiler for C supports arguments -march=native: YES 00:08:22.593 Checking for size of "void *" : 8 00:08:22.593 Checking for size of "void *" : 8 (cached) 00:08:22.593 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:08:22.593 Library m found: YES 00:08:22.593 Library numa found: YES 00:08:22.593 Has header "numaif.h" : YES 00:08:22.593 Library fdt found: NO 00:08:22.593 Library execinfo found: NO 00:08:22.593 Has header "execinfo.h" : YES 00:08:22.593 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:22.593 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:22.593 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:22.593 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:22.593 Run-time dependency openssl found: YES 3.0.9 00:08:22.593 Run-time dependency libpcap found: YES 1.10.4 00:08:22.593 Has header "pcap.h" with dependency libpcap: YES 00:08:22.593 Compiler for C supports arguments -Wcast-qual: YES 00:08:22.593 Compiler for C supports arguments -Wdeprecated: YES 00:08:22.593 Compiler for C supports arguments -Wformat: YES 00:08:22.593 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:22.593 Compiler for C supports arguments -Wformat-security: NO 00:08:22.593 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:22.593 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:22.593 Compiler for C supports arguments -Wnested-externs: YES 00:08:22.593 Compiler for C supports arguments -Wold-style-definition: YES 00:08:22.593 Compiler for C supports arguments -Wpointer-arith: YES 00:08:22.593 Compiler for C supports arguments -Wsign-compare: YES 00:08:22.593 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:22.593 Compiler for C supports arguments -Wundef: YES 00:08:22.593 Compiler for C supports arguments -Wwrite-strings: YES 00:08:22.593 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:22.593 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:08:22.593 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:22.593 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:22.593 Program objdump found: YES (/usr/bin/objdump) 00:08:22.593 Compiler for C supports arguments -mavx512f: YES 00:08:22.593 Checking if "AVX512 checking" compiles: YES 00:08:22.593 Fetching value of define "__SSE4_2__" : 1 00:08:22.593 Fetching value of define "__AES__" : 1 00:08:22.593 Fetching value of define "__AVX__" : 1 00:08:22.593 Fetching value of define "__AVX2__" : 1 00:08:22.593 Fetching value of define "__AVX512BW__" : 1 00:08:22.593 Fetching value of define "__AVX512CD__" : 1 00:08:22.593 Fetching value of define "__AVX512DQ__" : 1 00:08:22.593 Fetching value of define "__AVX512F__" : 1 00:08:22.593 Fetching value of define "__AVX512VL__" : 1 00:08:22.593 Fetching value of define "__PCLMUL__" : 1 00:08:22.593 Fetching value of define "__RDRND__" : 1 00:08:22.593 Fetching value of define "__RDSEED__" : 1 00:08:22.593 Fetching value of define "__VPCLMULQDQ__" : 1 00:08:22.593 Fetching value of define "__znver1__" : (undefined) 00:08:22.593 Fetching value of define "__znver2__" : (undefined) 00:08:22.593 Fetching value of define "__znver3__" : (undefined) 00:08:22.593 Fetching value of define "__znver4__" : (undefined) 00:08:22.593 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:22.593 Message: lib/log: Defining dependency "log" 00:08:22.593 Message: lib/kvargs: Defining dependency "kvargs" 00:08:22.593 Message: lib/telemetry: Defining dependency "telemetry" 00:08:22.593 Checking for function "getentropy" : NO 00:08:22.593 Message: lib/eal: Defining dependency "eal" 00:08:22.593 Message: lib/ring: Defining dependency "ring" 00:08:22.593 Message: lib/rcu: Defining dependency "rcu" 00:08:22.593 Message: lib/mempool: Defining dependency "mempool" 00:08:22.593 Message: lib/mbuf: Defining dependency "mbuf" 00:08:22.593 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:22.593 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:22.593 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:22.593 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:22.593 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:22.593 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:08:22.593 Compiler for C supports arguments -mpclmul: YES 00:08:22.593 Compiler for C supports arguments -maes: YES 00:08:22.593 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:22.593 Compiler for C supports arguments -mavx512bw: YES 00:08:22.593 Compiler for C supports arguments -mavx512dq: YES 00:08:22.593 Compiler for C supports arguments -mavx512vl: YES 00:08:22.593 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:22.593 Compiler for C supports arguments -mavx2: YES 00:08:22.593 Compiler for C supports arguments -mavx: YES 00:08:22.593 Message: lib/net: Defining dependency "net" 00:08:22.593 Message: lib/meter: Defining dependency "meter" 00:08:22.593 Message: lib/ethdev: Defining dependency "ethdev" 00:08:22.593 Message: lib/pci: Defining dependency "pci" 00:08:22.593 Message: lib/cmdline: Defining dependency "cmdline" 00:08:22.593 Message: lib/hash: Defining dependency "hash" 00:08:22.593 Message: lib/timer: Defining dependency "timer" 00:08:22.593 Message: lib/compressdev: Defining dependency "compressdev" 00:08:22.593 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:22.593 Message: lib/dmadev: Defining dependency "dmadev" 00:08:22.593 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:08:22.593 Message: lib/power: Defining dependency "power" 00:08:22.593 Message: lib/reorder: Defining dependency "reorder" 00:08:22.593 Message: lib/security: Defining dependency "security" 00:08:22.593 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:08:22.593 Message: lib/stack: Defining dependency "stack" 00:08:22.593 Has header "linux/userfaultfd.h" : YES 00:08:22.593 Has header "linux/vduse.h" : YES 00:08:22.593 Message: lib/vhost: Defining dependency "vhost" 00:08:22.593 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:22.593 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:22.593 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:22.593 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:22.593 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:22.593 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:22.593 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:22.593 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:22.593 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:22.593 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:22.593 Program doxygen found: YES (/usr/bin/doxygen) 00:08:22.593 Configuring doxy-api-html.conf using configuration 00:08:22.593 Configuring doxy-api-man.conf using configuration 00:08:22.593 Program mandb found: YES (/usr/bin/mandb) 00:08:22.593 Program sphinx-build found: NO 00:08:22.593 Configuring rte_build_config.h using configuration 00:08:22.593 Message: 00:08:22.593 ================= 00:08:22.593 Applications Enabled 00:08:22.593 ================= 00:08:22.593 00:08:22.593 apps: 00:08:22.593 00:08:22.593 00:08:22.593 Message: 00:08:22.593 ================= 00:08:22.593 Libraries Enabled 00:08:22.593 ================= 00:08:22.593 00:08:22.593 libs: 00:08:22.593 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:22.593 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:22.593 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:08:22.593 00:08:22.593 Message: 00:08:22.593 =============== 00:08:22.593 Drivers Enabled 00:08:22.593 =============== 00:08:22.593 00:08:22.593 common: 00:08:22.593 00:08:22.593 bus: 00:08:22.593 pci, vdev, 00:08:22.593 mempool: 00:08:22.593 ring, 00:08:22.593 dma: 00:08:22.593 00:08:22.593 net: 00:08:22.593 00:08:22.593 crypto: 00:08:22.593 00:08:22.593 compress: 00:08:22.593 00:08:22.593 vdpa: 00:08:22.593 00:08:22.593 00:08:22.593 Message: 00:08:22.593 ================= 00:08:22.593 Content Skipped 00:08:22.593 ================= 00:08:22.593 00:08:22.593 apps: 00:08:22.593 dumpcap: explicitly disabled via build config 00:08:22.593 graph: explicitly disabled via build config 00:08:22.593 pdump: explicitly disabled via build config 00:08:22.593 proc-info: explicitly disabled via build config 00:08:22.593 test-acl: explicitly disabled via build config 00:08:22.593 test-bbdev: explicitly disabled via build config 00:08:22.593 test-cmdline: explicitly disabled via build config 00:08:22.593 test-compress-perf: explicitly disabled via build config 00:08:22.593 test-crypto-perf: explicitly disabled via build config 00:08:22.593 test-dma-perf: explicitly disabled via build config 00:08:22.593 test-eventdev: explicitly disabled via build config 00:08:22.593 test-fib: explicitly disabled via build 
config 00:08:22.594 test-flow-perf: explicitly disabled via build config 00:08:22.594 test-gpudev: explicitly disabled via build config 00:08:22.594 test-mldev: explicitly disabled via build config 00:08:22.594 test-pipeline: explicitly disabled via build config 00:08:22.594 test-pmd: explicitly disabled via build config 00:08:22.594 test-regex: explicitly disabled via build config 00:08:22.594 test-sad: explicitly disabled via build config 00:08:22.594 test-security-perf: explicitly disabled via build config 00:08:22.594 00:08:22.594 libs: 00:08:22.594 argparse: explicitly disabled via build config 00:08:22.594 metrics: explicitly disabled via build config 00:08:22.594 acl: explicitly disabled via build config 00:08:22.594 bbdev: explicitly disabled via build config 00:08:22.594 bitratestats: explicitly disabled via build config 00:08:22.594 bpf: explicitly disabled via build config 00:08:22.594 cfgfile: explicitly disabled via build config 00:08:22.594 distributor: explicitly disabled via build config 00:08:22.594 efd: explicitly disabled via build config 00:08:22.594 eventdev: explicitly disabled via build config 00:08:22.594 dispatcher: explicitly disabled via build config 00:08:22.594 gpudev: explicitly disabled via build config 00:08:22.594 gro: explicitly disabled via build config 00:08:22.594 gso: explicitly disabled via build config 00:08:22.594 ip_frag: explicitly disabled via build config 00:08:22.594 jobstats: explicitly disabled via build config 00:08:22.594 latencystats: explicitly disabled via build config 00:08:22.594 lpm: explicitly disabled via build config 00:08:22.594 member: explicitly disabled via build config 00:08:22.594 pcapng: explicitly disabled via build config 00:08:22.594 rawdev: explicitly disabled via build config 00:08:22.594 regexdev: explicitly disabled via build config 00:08:22.594 mldev: explicitly disabled via build config 00:08:22.594 rib: explicitly disabled via build config 00:08:22.594 sched: explicitly disabled via build config 00:08:22.594 ipsec: explicitly disabled via build config 00:08:22.594 pdcp: explicitly disabled via build config 00:08:22.594 fib: explicitly disabled via build config 00:08:22.594 port: explicitly disabled via build config 00:08:22.594 pdump: explicitly disabled via build config 00:08:22.594 table: explicitly disabled via build config 00:08:22.594 pipeline: explicitly disabled via build config 00:08:22.594 graph: explicitly disabled via build config 00:08:22.594 node: explicitly disabled via build config 00:08:22.594 00:08:22.594 drivers: 00:08:22.594 common/cpt: not in enabled drivers build config 00:08:22.594 common/dpaax: not in enabled drivers build config 00:08:22.594 common/iavf: not in enabled drivers build config 00:08:22.594 common/idpf: not in enabled drivers build config 00:08:22.594 common/ionic: not in enabled drivers build config 00:08:22.594 common/mvep: not in enabled drivers build config 00:08:22.594 common/octeontx: not in enabled drivers build config 00:08:22.594 bus/auxiliary: not in enabled drivers build config 00:08:22.594 bus/cdx: not in enabled drivers build config 00:08:22.594 bus/dpaa: not in enabled drivers build config 00:08:22.594 bus/fslmc: not in enabled drivers build config 00:08:22.594 bus/ifpga: not in enabled drivers build config 00:08:22.594 bus/platform: not in enabled drivers build config 00:08:22.594 bus/uacce: not in enabled drivers build config 00:08:22.594 bus/vmbus: not in enabled drivers build config 00:08:22.594 common/cnxk: not in enabled drivers build config 00:08:22.594 
common/mlx5: not in enabled drivers build config 00:08:22.594 common/nfp: not in enabled drivers build config 00:08:22.594 common/nitrox: not in enabled drivers build config 00:08:22.594 common/qat: not in enabled drivers build config 00:08:22.594 common/sfc_efx: not in enabled drivers build config 00:08:22.594 mempool/bucket: not in enabled drivers build config 00:08:22.594 mempool/cnxk: not in enabled drivers build config 00:08:22.594 mempool/dpaa: not in enabled drivers build config 00:08:22.594 mempool/dpaa2: not in enabled drivers build config 00:08:22.594 mempool/octeontx: not in enabled drivers build config 00:08:22.594 mempool/stack: not in enabled drivers build config 00:08:22.594 dma/cnxk: not in enabled drivers build config 00:08:22.594 dma/dpaa: not in enabled drivers build config 00:08:22.594 dma/dpaa2: not in enabled drivers build config 00:08:22.594 dma/hisilicon: not in enabled drivers build config 00:08:22.594 dma/idxd: not in enabled drivers build config 00:08:22.594 dma/ioat: not in enabled drivers build config 00:08:22.594 dma/skeleton: not in enabled drivers build config 00:08:22.594 net/af_packet: not in enabled drivers build config 00:08:22.594 net/af_xdp: not in enabled drivers build config 00:08:22.594 net/ark: not in enabled drivers build config 00:08:22.594 net/atlantic: not in enabled drivers build config 00:08:22.594 net/avp: not in enabled drivers build config 00:08:22.594 net/axgbe: not in enabled drivers build config 00:08:22.594 net/bnx2x: not in enabled drivers build config 00:08:22.594 net/bnxt: not in enabled drivers build config 00:08:22.594 net/bonding: not in enabled drivers build config 00:08:22.594 net/cnxk: not in enabled drivers build config 00:08:22.594 net/cpfl: not in enabled drivers build config 00:08:22.594 net/cxgbe: not in enabled drivers build config 00:08:22.594 net/dpaa: not in enabled drivers build config 00:08:22.594 net/dpaa2: not in enabled drivers build config 00:08:22.594 net/e1000: not in enabled drivers build config 00:08:22.594 net/ena: not in enabled drivers build config 00:08:22.594 net/enetc: not in enabled drivers build config 00:08:22.594 net/enetfec: not in enabled drivers build config 00:08:22.594 net/enic: not in enabled drivers build config 00:08:22.594 net/failsafe: not in enabled drivers build config 00:08:22.594 net/fm10k: not in enabled drivers build config 00:08:22.594 net/gve: not in enabled drivers build config 00:08:22.594 net/hinic: not in enabled drivers build config 00:08:22.594 net/hns3: not in enabled drivers build config 00:08:22.594 net/i40e: not in enabled drivers build config 00:08:22.594 net/iavf: not in enabled drivers build config 00:08:22.594 net/ice: not in enabled drivers build config 00:08:22.594 net/idpf: not in enabled drivers build config 00:08:22.594 net/igc: not in enabled drivers build config 00:08:22.594 net/ionic: not in enabled drivers build config 00:08:22.594 net/ipn3ke: not in enabled drivers build config 00:08:22.594 net/ixgbe: not in enabled drivers build config 00:08:22.594 net/mana: not in enabled drivers build config 00:08:22.594 net/memif: not in enabled drivers build config 00:08:22.594 net/mlx4: not in enabled drivers build config 00:08:22.594 net/mlx5: not in enabled drivers build config 00:08:22.594 net/mvneta: not in enabled drivers build config 00:08:22.594 net/mvpp2: not in enabled drivers build config 00:08:22.594 net/netvsc: not in enabled drivers build config 00:08:22.594 net/nfb: not in enabled drivers build config 00:08:22.594 net/nfp: not in enabled drivers build 
config 00:08:22.594 net/ngbe: not in enabled drivers build config 00:08:22.594 net/null: not in enabled drivers build config 00:08:22.594 net/octeontx: not in enabled drivers build config 00:08:22.594 net/octeon_ep: not in enabled drivers build config 00:08:22.594 net/pcap: not in enabled drivers build config 00:08:22.594 net/pfe: not in enabled drivers build config 00:08:22.594 net/qede: not in enabled drivers build config 00:08:22.594 net/ring: not in enabled drivers build config 00:08:22.594 net/sfc: not in enabled drivers build config 00:08:22.594 net/softnic: not in enabled drivers build config 00:08:22.594 net/tap: not in enabled drivers build config 00:08:22.594 net/thunderx: not in enabled drivers build config 00:08:22.594 net/txgbe: not in enabled drivers build config 00:08:22.594 net/vdev_netvsc: not in enabled drivers build config 00:08:22.594 net/vhost: not in enabled drivers build config 00:08:22.594 net/virtio: not in enabled drivers build config 00:08:22.594 net/vmxnet3: not in enabled drivers build config 00:08:22.594 raw/*: missing internal dependency, "rawdev" 00:08:22.594 crypto/armv8: not in enabled drivers build config 00:08:22.594 crypto/bcmfs: not in enabled drivers build config 00:08:22.594 crypto/caam_jr: not in enabled drivers build config 00:08:22.594 crypto/ccp: not in enabled drivers build config 00:08:22.594 crypto/cnxk: not in enabled drivers build config 00:08:22.594 crypto/dpaa_sec: not in enabled drivers build config 00:08:22.594 crypto/dpaa2_sec: not in enabled drivers build config 00:08:22.594 crypto/ipsec_mb: not in enabled drivers build config 00:08:22.594 crypto/mlx5: not in enabled drivers build config 00:08:22.594 crypto/mvsam: not in enabled drivers build config 00:08:22.594 crypto/nitrox: not in enabled drivers build config 00:08:22.594 crypto/null: not in enabled drivers build config 00:08:22.594 crypto/octeontx: not in enabled drivers build config 00:08:22.594 crypto/openssl: not in enabled drivers build config 00:08:22.594 crypto/scheduler: not in enabled drivers build config 00:08:22.594 crypto/uadk: not in enabled drivers build config 00:08:22.594 crypto/virtio: not in enabled drivers build config 00:08:22.594 compress/isal: not in enabled drivers build config 00:08:22.594 compress/mlx5: not in enabled drivers build config 00:08:22.594 compress/nitrox: not in enabled drivers build config 00:08:22.594 compress/octeontx: not in enabled drivers build config 00:08:22.594 compress/zlib: not in enabled drivers build config 00:08:22.594 regex/*: missing internal dependency, "regexdev" 00:08:22.594 ml/*: missing internal dependency, "mldev" 00:08:22.594 vdpa/ifc: not in enabled drivers build config 00:08:22.594 vdpa/mlx5: not in enabled drivers build config 00:08:22.594 vdpa/nfp: not in enabled drivers build config 00:08:22.594 vdpa/sfc: not in enabled drivers build config 00:08:22.594 event/*: missing internal dependency, "eventdev" 00:08:22.594 baseband/*: missing internal dependency, "bbdev" 00:08:22.594 gpu/*: missing internal dependency, "gpudev" 00:08:22.594 00:08:22.594 00:08:22.594 Build targets in project: 87 00:08:22.594 00:08:22.594 DPDK 24.03.0 00:08:22.594 00:08:22.594 User defined options 00:08:22.594 buildtype : debug 00:08:22.594 default_library : shared 00:08:22.594 libdir : lib 00:08:22.594 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:22.594 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:22.594 c_link_args : 00:08:22.594 cpu_instruction_set: native 
00:08:22.594 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:08:22.594 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib,argparse 00:08:22.594 enable_docs : false 00:08:22.594 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:08:22.594 enable_kmods : false 00:08:22.594 tests : false 00:08:22.594 00:08:22.594 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:22.855 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:08:23.121 [1/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:23.121 [2/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:23.121 [3/273] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:23.121 [4/273] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:23.121 [5/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:23.121 [6/273] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:23.121 [7/273] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:23.121 [8/273] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:23.121 [9/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:23.121 [10/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:23.121 [11/273] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:23.122 [12/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:23.122 [13/273] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:23.122 [14/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:23.388 [15/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:23.388 [16/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:23.388 [17/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:23.388 [18/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:23.388 [19/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:23.388 [20/273] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:23.388 [21/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:23.388 [22/273] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:23.388 [23/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:23.388 [24/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:23.388 [25/273] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:23.388 [26/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:23.388 [27/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:23.388 [28/273] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:23.388 [29/273] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:23.388 [30/273] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:23.388 [31/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:23.388 [32/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:23.388 [33/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:23.388 [34/273] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:23.388 [35/273] Linking static target lib/librte_kvargs.a 00:08:23.388 [36/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:23.388 [37/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:23.388 [38/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:23.388 [39/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:23.388 [40/273] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:23.388 [41/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:23.388 [42/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:23.388 [43/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:23.388 [44/273] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:23.388 [45/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:23.388 [46/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:23.388 [47/273] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:23.388 [48/273] Linking static target lib/librte_log.a 00:08:23.388 [49/273] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:23.388 [50/273] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:23.388 [51/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:23.388 [52/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:23.388 [53/273] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:23.388 [54/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:23.388 [55/273] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:23.388 [56/273] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:23.388 [57/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:23.388 [58/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:23.388 [59/273] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:23.388 [60/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:23.388 [61/273] Linking static target lib/librte_pci.a 00:08:23.388 [62/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:23.388 [63/273] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:23.388 [64/273] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:23.388 [65/273] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:23.388 [66/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:23.388 [67/273] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:08:23.388 [68/273] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:23.649 [69/273] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:23.649 [70/273] Linking static target lib/librte_ring.a 00:08:23.649 [71/273] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:23.649 [72/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:23.649 [73/273] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:23.649 [74/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:23.649 [75/273] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:23.649 [76/273] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:23.649 [77/273] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:23.649 [78/273] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:23.649 [79/273] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:23.649 [80/273] Linking static target lib/librte_timer.a 00:08:23.649 [81/273] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:23.649 [82/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:23.649 [83/273] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:23.649 [84/273] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:23.649 [85/273] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:23.649 [86/273] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:23.649 [87/273] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:23.649 [88/273] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:23.649 [89/273] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:23.649 [90/273] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.649 [91/273] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:23.649 [92/273] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:23.649 [93/273] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:23.649 [94/273] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:23.649 [95/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:23.649 [96/273] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:23.649 [97/273] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:08:23.650 [98/273] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.650 [99/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:23.650 [100/273] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:23.650 [101/273] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:23.650 [102/273] Linking static target lib/librte_mbuf.a 00:08:23.650 [103/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:23.650 [104/273] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:23.650 [105/273] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:23.650 [106/273] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:23.650 [107/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:23.650 [108/273] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:23.650 [109/273] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:23.650 [110/273] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:23.650 [111/273] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:23.650 [112/273] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:23.650 [113/273] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:23.650 [114/273] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:23.650 [115/273] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:08:23.650 [116/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:23.650 [117/273] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:23.650 [118/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:23.650 [119/273] Linking static target lib/librte_stack.a 00:08:23.911 [120/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:23.911 [121/273] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:23.911 [122/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:23.911 [123/273] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:08:23.911 [124/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:23.911 [125/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:23.911 [126/273] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:23.911 [127/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:23.911 [128/273] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.911 [129/273] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:23.911 [130/273] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:23.911 [131/273] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:23.911 [132/273] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:23.911 [133/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:23.911 [134/273] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:23.911 [135/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:23.911 [136/273] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:23.911 [137/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:23.911 [138/273] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:23.911 [139/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:23.911 [140/273] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:23.911 [141/273] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:23.911 [142/273] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:23.911 [143/273] Linking static target lib/librte_telemetry.a 00:08:23.911 [144/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:23.911 [145/273] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:23.911 [146/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:23.911 [147/273] Linking static target lib/librte_meter.a 00:08:23.911 [148/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:23.911 [149/273] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:23.911 [150/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:23.911 [151/273] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:23.911 [152/273] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:23.911 [153/273] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:23.911 [154/273] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:23.911 [155/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:23.911 [156/273] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:23.911 [157/273] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.911 [158/273] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:23.911 [159/273] Linking static target lib/librte_mempool.a 00:08:23.911 [160/273] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:23.911 [161/273] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:23.911 [162/273] Linking static target lib/librte_cmdline.a 00:08:23.911 [163/273] Linking static target lib/librte_rcu.a 00:08:23.911 [164/273] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:23.911 [165/273] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:23.911 [166/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:23.911 [167/273] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:23.911 [168/273] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:23.911 [169/273] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:23.911 [170/273] Linking target lib/librte_log.so.24.1 00:08:23.911 [171/273] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.911 [172/273] Linking static target lib/librte_eal.a 00:08:23.911 [173/273] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:23.911 [174/273] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:23.911 [175/273] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:23.911 [176/273] Linking static target lib/librte_compressdev.a 00:08:23.911 [177/273] Linking static target lib/librte_dmadev.a 00:08:23.911 [178/273] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:24.171 [179/273] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:24.171 [180/273] Linking static target lib/librte_reorder.a 00:08:24.171 [181/273] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:24.171 [182/273] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.171 [183/273] Linking static target lib/librte_power.a 00:08:24.171 [184/273] Linking static target lib/librte_net.a 00:08:24.171 [185/273] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:24.171 [186/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:24.171 [187/273] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:24.171 [188/273] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:24.171 [189/273] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:24.171 [190/273] Linking static target lib/librte_hash.a 00:08:24.171 [191/273] Linking static target lib/librte_cryptodev.a 00:08:24.171 [192/273] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:24.171 [193/273] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:08:24.171 [194/273] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:24.171 [195/273] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:24.171 [196/273] Linking static target drivers/librte_mempool_ring.a 00:08:24.171 [197/273] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:24.171 [198/273] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:24.171 [199/273] Linking static target lib/librte_security.a 00:08:24.171 [200/273] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:24.171 [201/273] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:24.171 [202/273] Linking static target drivers/librte_bus_vdev.a 00:08:24.171 [203/273] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:24.171 [204/273] Linking target lib/librte_kvargs.so.24.1 00:08:24.171 [205/273] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:24.171 [206/273] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:24.171 [207/273] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.171 [208/273] Linking static target drivers/librte_bus_pci.a 00:08:24.171 [209/273] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:24.171 [210/273] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:24.171 [211/273] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:24.171 [212/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:24.432 [213/273] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:24.432 [214/273] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.432 [215/273] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.432 [216/273] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.432 [217/273] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.432 [218/273] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.432 [219/273] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.693 [220/273] Linking target lib/librte_telemetry.so.24.1 00:08:24.693 [221/273] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:24.693 [222/273] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.693 [223/273] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.954 [224/273] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.954 [225/273] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:24.954 [226/273] Linking static target lib/librte_ethdev.a 00:08:24.954 [227/273] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.954 [228/273] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.954 [229/273] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.217 [230/273] 
Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.217 [231/273] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.790 [232/273] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:25.790 [233/273] Linking static target lib/librte_vhost.a 00:08:26.364 [234/273] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.284 [235/273] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.878 [236/273] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.823 [237/273] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.823 [238/273] Linking target lib/librte_eal.so.24.1 00:08:36.085 [239/273] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:36.085 [240/273] Linking target lib/librte_ring.so.24.1 00:08:36.085 [241/273] Linking target lib/librte_meter.so.24.1 00:08:36.085 [242/273] Linking target lib/librte_pci.so.24.1 00:08:36.085 [243/273] Linking target lib/librte_timer.so.24.1 00:08:36.085 [244/273] Linking target lib/librte_stack.so.24.1 00:08:36.085 [245/273] Linking target lib/librte_dmadev.so.24.1 00:08:36.085 [246/273] Linking target drivers/librte_bus_vdev.so.24.1 00:08:36.347 [247/273] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:36.347 [248/273] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:36.347 [249/273] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:36.347 [250/273] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:36.347 [251/273] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:36.347 [252/273] Linking target lib/librte_rcu.so.24.1 00:08:36.347 [253/273] Linking target lib/librte_mempool.so.24.1 00:08:36.347 [254/273] Linking target drivers/librte_bus_pci.so.24.1 00:08:36.609 [255/273] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:36.609 [256/273] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:36.609 [257/273] Linking target lib/librte_mbuf.so.24.1 00:08:36.609 [258/273] Linking target drivers/librte_mempool_ring.so.24.1 00:08:36.609 [259/273] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:36.871 [260/273] Linking target lib/librte_compressdev.so.24.1 00:08:36.871 [261/273] Linking target lib/librte_net.so.24.1 00:08:36.871 [262/273] Linking target lib/librte_reorder.so.24.1 00:08:36.871 [263/273] Linking target lib/librte_cryptodev.so.24.1 00:08:36.871 [264/273] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:36.871 [265/273] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:36.871 [266/273] Linking target lib/librte_hash.so.24.1 00:08:36.871 [267/273] Linking target lib/librte_security.so.24.1 00:08:36.871 [268/273] Linking target lib/librte_cmdline.so.24.1 00:08:36.871 [269/273] Linking target lib/librte_ethdev.so.24.1 00:08:37.132 [270/273] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:37.132 [271/273] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:37.132 [272/273] Linking target lib/librte_power.so.24.1 00:08:37.132 
[273/273] Linking target lib/librte_vhost.so.24.1 00:08:37.132 INFO: autodetecting backend as ninja 00:08:37.132 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:08:38.522 CC lib/log/log.o 00:08:38.522 CC lib/log/log_flags.o 00:08:38.522 CC lib/log/log_deprecated.o 00:08:38.522 CC lib/ut/ut.o 00:08:38.522 CC lib/ut_mock/mock.o 00:08:38.522 LIB libspdk_ut_mock.a 00:08:38.522 LIB libspdk_ut.a 00:08:38.522 LIB libspdk_log.a 00:08:38.522 SO libspdk_ut_mock.so.6.0 00:08:38.522 SO libspdk_ut.so.2.0 00:08:38.522 SO libspdk_log.so.7.0 00:08:38.522 SYMLINK libspdk_ut_mock.so 00:08:38.784 SYMLINK libspdk_ut.so 00:08:38.784 SYMLINK libspdk_log.so 00:08:39.046 CC lib/ioat/ioat.o 00:08:39.046 CC lib/util/base64.o 00:08:39.046 CC lib/util/bit_array.o 00:08:39.046 CC lib/util/cpuset.o 00:08:39.046 CC lib/dma/dma.o 00:08:39.046 CXX lib/trace_parser/trace.o 00:08:39.046 CC lib/util/crc16.o 00:08:39.046 CC lib/util/crc32.o 00:08:39.046 CC lib/util/crc32c.o 00:08:39.046 CC lib/util/crc32_ieee.o 00:08:39.046 CC lib/util/crc64.o 00:08:39.046 CC lib/util/dif.o 00:08:39.046 CC lib/util/fd.o 00:08:39.046 CC lib/util/iov.o 00:08:39.046 CC lib/util/file.o 00:08:39.046 CC lib/util/hexlify.o 00:08:39.046 CC lib/util/pipe.o 00:08:39.046 CC lib/util/math.o 00:08:39.046 CC lib/util/strerror_tls.o 00:08:39.046 CC lib/util/string.o 00:08:39.046 CC lib/util/uuid.o 00:08:39.046 CC lib/util/fd_group.o 00:08:39.046 CC lib/util/xor.o 00:08:39.046 CC lib/util/zipf.o 00:08:39.309 CC lib/vfio_user/host/vfio_user_pci.o 00:08:39.309 CC lib/vfio_user/host/vfio_user.o 00:08:39.309 LIB libspdk_dma.a 00:08:39.309 SO libspdk_dma.so.4.0 00:08:39.309 LIB libspdk_ioat.a 00:08:39.309 SYMLINK libspdk_dma.so 00:08:39.309 SO libspdk_ioat.so.7.0 00:08:39.309 SYMLINK libspdk_ioat.so 00:08:39.571 LIB libspdk_vfio_user.a 00:08:39.571 SO libspdk_vfio_user.so.5.0 00:08:39.571 LIB libspdk_util.a 00:08:39.571 SYMLINK libspdk_vfio_user.so 00:08:39.571 SO libspdk_util.so.9.0 00:08:39.833 SYMLINK libspdk_util.so 00:08:40.096 CC lib/idxd/idxd.o 00:08:40.096 CC lib/idxd/idxd_user.o 00:08:40.096 CC lib/conf/conf.o 00:08:40.096 CC lib/json/json_parse.o 00:08:40.096 CC lib/json/json_util.o 00:08:40.096 CC lib/json/json_write.o 00:08:40.096 CC lib/rdma/common.o 00:08:40.096 CC lib/vmd/vmd.o 00:08:40.096 CC lib/rdma/rdma_verbs.o 00:08:40.096 CC lib/vmd/led.o 00:08:40.096 CC lib/env_dpdk/env.o 00:08:40.096 CC lib/env_dpdk/memory.o 00:08:40.096 CC lib/env_dpdk/pci.o 00:08:40.096 CC lib/env_dpdk/init.o 00:08:40.096 CC lib/env_dpdk/threads.o 00:08:40.096 CC lib/env_dpdk/pci_ioat.o 00:08:40.096 CC lib/env_dpdk/pci_virtio.o 00:08:40.096 CC lib/env_dpdk/pci_vmd.o 00:08:40.096 CC lib/env_dpdk/pci_idxd.o 00:08:40.096 CC lib/env_dpdk/pci_event.o 00:08:40.096 CC lib/env_dpdk/sigbus_handler.o 00:08:40.096 CC lib/env_dpdk/pci_dpdk.o 00:08:40.096 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:40.096 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:40.364 LIB libspdk_conf.a 00:08:40.364 SO libspdk_conf.so.6.0 00:08:40.364 LIB libspdk_rdma.a 00:08:40.364 LIB libspdk_json.a 00:08:40.364 SYMLINK libspdk_conf.so 00:08:40.364 SO libspdk_rdma.so.6.0 00:08:40.364 SO libspdk_json.so.6.0 00:08:40.629 SYMLINK libspdk_rdma.so 00:08:40.629 SYMLINK libspdk_json.so 00:08:40.629 LIB libspdk_idxd.a 00:08:40.629 LIB libspdk_trace_parser.a 00:08:40.629 SO libspdk_trace_parser.so.5.0 00:08:40.629 SO libspdk_idxd.so.12.0 00:08:40.629 LIB libspdk_vmd.a 00:08:40.629 SYMLINK libspdk_idxd.so 00:08:40.891 SO libspdk_vmd.so.6.0 
00:08:40.891 SYMLINK libspdk_trace_parser.so 00:08:40.891 SYMLINK libspdk_vmd.so 00:08:40.891 CC lib/jsonrpc/jsonrpc_server.o 00:08:40.891 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:40.891 CC lib/jsonrpc/jsonrpc_client.o 00:08:40.891 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:41.154 LIB libspdk_jsonrpc.a 00:08:41.154 SO libspdk_jsonrpc.so.6.0 00:08:41.154 SYMLINK libspdk_jsonrpc.so 00:08:41.417 LIB libspdk_env_dpdk.a 00:08:41.417 SO libspdk_env_dpdk.so.14.0 00:08:41.417 SYMLINK libspdk_env_dpdk.so 00:08:41.679 CC lib/rpc/rpc.o 00:08:41.941 LIB libspdk_rpc.a 00:08:41.941 SO libspdk_rpc.so.6.0 00:08:41.941 SYMLINK libspdk_rpc.so 00:08:42.203 CC lib/notify/notify.o 00:08:42.203 CC lib/notify/notify_rpc.o 00:08:42.203 CC lib/trace/trace.o 00:08:42.203 CC lib/keyring/keyring.o 00:08:42.203 CC lib/trace/trace_flags.o 00:08:42.203 CC lib/trace/trace_rpc.o 00:08:42.203 CC lib/keyring/keyring_rpc.o 00:08:42.466 LIB libspdk_notify.a 00:08:42.466 SO libspdk_notify.so.6.0 00:08:42.466 LIB libspdk_keyring.a 00:08:42.466 LIB libspdk_trace.a 00:08:42.466 SYMLINK libspdk_notify.so 00:08:42.728 SO libspdk_keyring.so.1.0 00:08:42.728 SO libspdk_trace.so.10.0 00:08:42.728 SYMLINK libspdk_keyring.so 00:08:42.728 SYMLINK libspdk_trace.so 00:08:42.990 CC lib/thread/thread.o 00:08:42.990 CC lib/thread/iobuf.o 00:08:42.990 CC lib/sock/sock.o 00:08:42.990 CC lib/sock/sock_rpc.o 00:08:43.565 LIB libspdk_sock.a 00:08:43.565 SO libspdk_sock.so.9.0 00:08:43.565 SYMLINK libspdk_sock.so 00:08:43.827 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:43.827 CC lib/nvme/nvme_ctrlr.o 00:08:43.827 CC lib/nvme/nvme_fabric.o 00:08:43.827 CC lib/nvme/nvme_ns_cmd.o 00:08:43.827 CC lib/nvme/nvme_ns.o 00:08:43.827 CC lib/nvme/nvme_pcie_common.o 00:08:43.827 CC lib/nvme/nvme_pcie.o 00:08:43.827 CC lib/nvme/nvme_qpair.o 00:08:43.827 CC lib/nvme/nvme.o 00:08:43.827 CC lib/nvme/nvme_quirks.o 00:08:43.827 CC lib/nvme/nvme_transport.o 00:08:43.827 CC lib/nvme/nvme_discovery.o 00:08:43.827 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:43.827 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:43.827 CC lib/nvme/nvme_tcp.o 00:08:43.827 CC lib/nvme/nvme_opal.o 00:08:43.827 CC lib/nvme/nvme_io_msg.o 00:08:43.827 CC lib/nvme/nvme_poll_group.o 00:08:43.827 CC lib/nvme/nvme_zns.o 00:08:43.827 CC lib/nvme/nvme_stubs.o 00:08:43.827 CC lib/nvme/nvme_auth.o 00:08:43.827 CC lib/nvme/nvme_cuse.o 00:08:43.827 CC lib/nvme/nvme_vfio_user.o 00:08:43.827 CC lib/nvme/nvme_rdma.o 00:08:44.401 LIB libspdk_thread.a 00:08:44.401 SO libspdk_thread.so.10.0 00:08:44.401 SYMLINK libspdk_thread.so 00:08:44.664 CC lib/virtio/virtio.o 00:08:44.664 CC lib/virtio/virtio_vhost_user.o 00:08:44.664 CC lib/virtio/virtio_vfio_user.o 00:08:44.664 CC lib/virtio/virtio_pci.o 00:08:44.664 CC lib/blob/blobstore.o 00:08:44.664 CC lib/blob/request.o 00:08:44.664 CC lib/blob/zeroes.o 00:08:44.664 CC lib/blob/blob_bs_dev.o 00:08:44.664 CC lib/accel/accel.o 00:08:44.664 CC lib/init/json_config.o 00:08:44.664 CC lib/accel/accel_rpc.o 00:08:44.664 CC lib/init/subsystem.o 00:08:44.664 CC lib/accel/accel_sw.o 00:08:44.664 CC lib/init/subsystem_rpc.o 00:08:44.664 CC lib/init/rpc.o 00:08:44.664 CC lib/vfu_tgt/tgt_endpoint.o 00:08:44.664 CC lib/vfu_tgt/tgt_rpc.o 00:08:44.925 LIB libspdk_init.a 00:08:45.187 SO libspdk_init.so.5.0 00:08:45.187 LIB libspdk_virtio.a 00:08:45.187 LIB libspdk_vfu_tgt.a 00:08:45.187 SO libspdk_virtio.so.7.0 00:08:45.187 SO libspdk_vfu_tgt.so.3.0 00:08:45.187 SYMLINK libspdk_init.so 00:08:45.187 SYMLINK libspdk_vfu_tgt.so 00:08:45.187 SYMLINK libspdk_virtio.so 00:08:45.450 CC lib/event/app.o 
00:08:45.450 CC lib/event/reactor.o 00:08:45.450 CC lib/event/log_rpc.o 00:08:45.450 CC lib/event/app_rpc.o 00:08:45.450 CC lib/event/scheduler_static.o 00:08:45.713 LIB libspdk_accel.a 00:08:45.713 SO libspdk_accel.so.15.0 00:08:45.713 LIB libspdk_nvme.a 00:08:45.713 SYMLINK libspdk_accel.so 00:08:45.976 LIB libspdk_event.a 00:08:45.976 SO libspdk_nvme.so.13.0 00:08:45.976 SO libspdk_event.so.13.0 00:08:45.976 SYMLINK libspdk_event.so 00:08:46.239 CC lib/bdev/bdev.o 00:08:46.239 CC lib/bdev/bdev_rpc.o 00:08:46.239 CC lib/bdev/bdev_zone.o 00:08:46.239 CC lib/bdev/part.o 00:08:46.239 CC lib/bdev/scsi_nvme.o 00:08:46.239 SYMLINK libspdk_nvme.so 00:08:47.631 LIB libspdk_blob.a 00:08:47.631 SO libspdk_blob.so.11.0 00:08:47.631 SYMLINK libspdk_blob.so 00:08:47.894 CC lib/blobfs/blobfs.o 00:08:47.894 CC lib/blobfs/tree.o 00:08:47.894 CC lib/lvol/lvol.o 00:08:48.469 LIB libspdk_bdev.a 00:08:48.469 SO libspdk_bdev.so.15.0 00:08:48.469 SYMLINK libspdk_bdev.so 00:08:48.469 LIB libspdk_blobfs.a 00:08:48.731 SO libspdk_blobfs.so.10.0 00:08:48.731 LIB libspdk_lvol.a 00:08:48.731 SO libspdk_lvol.so.10.0 00:08:48.731 SYMLINK libspdk_blobfs.so 00:08:48.731 SYMLINK libspdk_lvol.so 00:08:48.991 CC lib/nvmf/ctrlr.o 00:08:48.991 CC lib/nvmf/ctrlr_discovery.o 00:08:48.991 CC lib/nvmf/ctrlr_bdev.o 00:08:48.991 CC lib/scsi/dev.o 00:08:48.991 CC lib/nvmf/subsystem.o 00:08:48.991 CC lib/scsi/lun.o 00:08:48.991 CC lib/nvmf/nvmf.o 00:08:48.991 CC lib/scsi/port.o 00:08:48.991 CC lib/nbd/nbd.o 00:08:48.991 CC lib/nvmf/nvmf_rpc.o 00:08:48.991 CC lib/ftl/ftl_core.o 00:08:48.991 CC lib/scsi/scsi.o 00:08:48.991 CC lib/nvmf/transport.o 00:08:48.991 CC lib/nbd/nbd_rpc.o 00:08:48.991 CC lib/ftl/ftl_init.o 00:08:48.991 CC lib/nvmf/tcp.o 00:08:48.991 CC lib/scsi/scsi_bdev.o 00:08:48.991 CC lib/ublk/ublk.o 00:08:48.991 CC lib/nvmf/stubs.o 00:08:48.991 CC lib/ftl/ftl_layout.o 00:08:48.991 CC lib/ublk/ublk_rpc.o 00:08:48.991 CC lib/scsi/scsi_pr.o 00:08:48.991 CC lib/nvmf/mdns_server.o 00:08:48.991 CC lib/ftl/ftl_debug.o 00:08:48.991 CC lib/scsi/scsi_rpc.o 00:08:48.991 CC lib/ftl/ftl_io.o 00:08:48.991 CC lib/nvmf/vfio_user.o 00:08:48.991 CC lib/nvmf/rdma.o 00:08:48.991 CC lib/scsi/task.o 00:08:48.991 CC lib/nvmf/auth.o 00:08:48.992 CC lib/ftl/ftl_sb.o 00:08:48.992 CC lib/ftl/ftl_l2p.o 00:08:48.992 CC lib/ftl/ftl_l2p_flat.o 00:08:48.992 CC lib/ftl/ftl_nv_cache.o 00:08:48.992 CC lib/ftl/ftl_band.o 00:08:48.992 CC lib/ftl/ftl_writer.o 00:08:48.992 CC lib/ftl/ftl_band_ops.o 00:08:48.992 CC lib/ftl/ftl_rq.o 00:08:48.992 CC lib/ftl/ftl_reloc.o 00:08:48.992 CC lib/ftl/ftl_l2p_cache.o 00:08:48.992 CC lib/ftl/ftl_p2l.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:48.992 CC lib/ftl/utils/ftl_conf.o 00:08:48.992 CC lib/ftl/utils/ftl_mempool.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:48.992 CC lib/ftl/utils/ftl_bitmap.o 00:08:48.992 CC lib/ftl/utils/ftl_property.o 00:08:48.992 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:48.992 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:48.992 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:48.992 CC lib/ftl/utils/ftl_md.o 00:08:48.992 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:48.992 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:48.992 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:48.992 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:48.992 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:48.992 CC lib/ftl/base/ftl_base_dev.o 00:08:48.992 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:48.992 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:48.992 CC lib/ftl/base/ftl_base_bdev.o 00:08:48.992 CC lib/ftl/ftl_trace.o 00:08:48.992 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:49.253 LIB libspdk_nbd.a 00:08:49.516 SO libspdk_nbd.so.7.0 00:08:49.516 SYMLINK libspdk_nbd.so 00:08:49.516 LIB libspdk_ublk.a 00:08:49.779 SO libspdk_ublk.so.3.0 00:08:49.779 SYMLINK libspdk_ublk.so 00:08:49.779 LIB libspdk_scsi.a 00:08:49.779 SO libspdk_scsi.so.9.0 00:08:49.779 LIB libspdk_ftl.a 00:08:50.042 SYMLINK libspdk_scsi.so 00:08:50.042 SO libspdk_ftl.so.9.0 00:08:50.313 CC lib/iscsi/conn.o 00:08:50.313 CC lib/iscsi/init_grp.o 00:08:50.313 CC lib/iscsi/iscsi.o 00:08:50.313 CC lib/iscsi/md5.o 00:08:50.313 CC lib/iscsi/param.o 00:08:50.313 CC lib/iscsi/iscsi_subsystem.o 00:08:50.313 CC lib/iscsi/portal_grp.o 00:08:50.313 CC lib/iscsi/tgt_node.o 00:08:50.313 CC lib/iscsi/iscsi_rpc.o 00:08:50.313 CC lib/vhost/vhost.o 00:08:50.313 CC lib/iscsi/task.o 00:08:50.313 CC lib/vhost/vhost_rpc.o 00:08:50.313 CC lib/vhost/vhost_scsi.o 00:08:50.313 CC lib/vhost/vhost_blk.o 00:08:50.313 CC lib/vhost/rte_vhost_user.o 00:08:50.313 SYMLINK libspdk_ftl.so 00:08:50.898 LIB libspdk_nvmf.a 00:08:50.898 SO libspdk_nvmf.so.18.0 00:08:51.160 SYMLINK libspdk_nvmf.so 00:08:51.160 LIB libspdk_vhost.a 00:08:51.422 SO libspdk_vhost.so.8.0 00:08:51.422 SYMLINK libspdk_vhost.so 00:08:51.422 LIB libspdk_iscsi.a 00:08:51.683 SO libspdk_iscsi.so.8.0 00:08:51.683 SYMLINK libspdk_iscsi.so 00:08:52.257 CC module/vfu_device/vfu_virtio.o 00:08:52.257 CC module/vfu_device/vfu_virtio_blk.o 00:08:52.257 CC module/vfu_device/vfu_virtio_scsi.o 00:08:52.257 CC module/vfu_device/vfu_virtio_rpc.o 00:08:52.257 CC module/env_dpdk/env_dpdk_rpc.o 00:08:52.519 CC module/accel/error/accel_error.o 00:08:52.519 CC module/accel/error/accel_error_rpc.o 00:08:52.519 LIB libspdk_env_dpdk_rpc.a 00:08:52.519 CC module/blob/bdev/blob_bdev.o 00:08:52.519 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:52.519 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:52.519 CC module/accel/ioat/accel_ioat.o 00:08:52.519 CC module/accel/ioat/accel_ioat_rpc.o 00:08:52.519 CC module/keyring/file/keyring.o 00:08:52.519 CC module/sock/posix/posix.o 00:08:52.519 CC module/scheduler/gscheduler/gscheduler.o 00:08:52.519 CC module/keyring/file/keyring_rpc.o 00:08:52.519 CC module/accel/iaa/accel_iaa.o 00:08:52.519 CC module/accel/iaa/accel_iaa_rpc.o 00:08:52.519 CC module/accel/dsa/accel_dsa.o 00:08:52.519 CC module/accel/dsa/accel_dsa_rpc.o 00:08:52.519 SO libspdk_env_dpdk_rpc.so.6.0 00:08:52.519 SYMLINK libspdk_env_dpdk_rpc.so 00:08:52.782 LIB libspdk_scheduler_dpdk_governor.a 00:08:52.782 LIB libspdk_scheduler_gscheduler.a 00:08:52.782 LIB libspdk_keyring_file.a 00:08:52.782 LIB libspdk_accel_error.a 00:08:52.782 LIB libspdk_accel_ioat.a 00:08:52.782 LIB libspdk_scheduler_dynamic.a 00:08:52.782 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:52.782 SO libspdk_scheduler_gscheduler.so.4.0 00:08:52.782 SO libspdk_keyring_file.so.1.0 00:08:52.782 SO libspdk_scheduler_dynamic.so.4.0 00:08:52.782 SO libspdk_accel_error.so.2.0 00:08:52.782 LIB libspdk_accel_iaa.a 00:08:52.782 LIB libspdk_blob_bdev.a 00:08:52.782 SO libspdk_accel_ioat.so.6.0 00:08:52.782 LIB 
libspdk_accel_dsa.a 00:08:52.782 SYMLINK libspdk_scheduler_gscheduler.so 00:08:52.782 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:52.782 SO libspdk_accel_iaa.so.3.0 00:08:52.782 SO libspdk_blob_bdev.so.11.0 00:08:52.782 SYMLINK libspdk_keyring_file.so 00:08:52.782 SYMLINK libspdk_scheduler_dynamic.so 00:08:52.782 SYMLINK libspdk_accel_error.so 00:08:52.782 SO libspdk_accel_dsa.so.5.0 00:08:52.782 SYMLINK libspdk_accel_ioat.so 00:08:52.782 SYMLINK libspdk_accel_iaa.so 00:08:52.782 SYMLINK libspdk_blob_bdev.so 00:08:52.782 LIB libspdk_vfu_device.a 00:08:52.782 SYMLINK libspdk_accel_dsa.so 00:08:52.782 SO libspdk_vfu_device.so.3.0 00:08:53.045 SYMLINK libspdk_vfu_device.so 00:08:53.045 LIB libspdk_sock_posix.a 00:08:53.045 SO libspdk_sock_posix.so.6.0 00:08:53.045 SYMLINK libspdk_sock_posix.so 00:08:53.306 CC module/bdev/passthru/vbdev_passthru.o 00:08:53.306 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:53.306 CC module/bdev/gpt/gpt.o 00:08:53.306 CC module/bdev/lvol/vbdev_lvol.o 00:08:53.306 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:53.306 CC module/blobfs/bdev/blobfs_bdev.o 00:08:53.306 CC module/bdev/gpt/vbdev_gpt.o 00:08:53.306 CC module/bdev/iscsi/bdev_iscsi.o 00:08:53.306 CC module/bdev/aio/bdev_aio.o 00:08:53.306 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:53.306 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:53.306 CC module/bdev/delay/vbdev_delay.o 00:08:53.306 CC module/bdev/aio/bdev_aio_rpc.o 00:08:53.306 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:53.306 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:53.306 CC module/bdev/error/vbdev_error.o 00:08:53.306 CC module/bdev/null/bdev_null.o 00:08:53.306 CC module/bdev/malloc/bdev_malloc.o 00:08:53.306 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:53.306 CC module/bdev/nvme/bdev_nvme.o 00:08:53.306 CC module/bdev/raid/bdev_raid.o 00:08:53.306 CC module/bdev/error/vbdev_error_rpc.o 00:08:53.306 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:53.306 CC module/bdev/null/bdev_null_rpc.o 00:08:53.306 CC module/bdev/raid/bdev_raid_rpc.o 00:08:53.306 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:53.307 CC module/bdev/nvme/nvme_rpc.o 00:08:53.307 CC module/bdev/raid/raid0.o 00:08:53.307 CC module/bdev/split/vbdev_split.o 00:08:53.307 CC module/bdev/raid/bdev_raid_sb.o 00:08:53.307 CC module/bdev/nvme/bdev_mdns_client.o 00:08:53.307 CC module/bdev/raid/raid1.o 00:08:53.307 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:53.307 CC module/bdev/raid/concat.o 00:08:53.307 CC module/bdev/nvme/vbdev_opal.o 00:08:53.307 CC module/bdev/split/vbdev_split_rpc.o 00:08:53.307 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:53.307 CC module/bdev/ftl/bdev_ftl.o 00:08:53.307 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:53.307 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:53.307 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:53.307 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:53.882 LIB libspdk_bdev_null.a 00:08:53.882 LIB libspdk_bdev_error.a 00:08:53.882 LIB libspdk_blobfs_bdev.a 00:08:53.882 SO libspdk_bdev_null.so.6.0 00:08:53.882 SO libspdk_bdev_error.so.6.0 00:08:53.882 LIB libspdk_bdev_ftl.a 00:08:53.882 LIB libspdk_bdev_split.a 00:08:53.882 SO libspdk_blobfs_bdev.so.6.0 00:08:53.882 LIB libspdk_bdev_passthru.a 00:08:53.882 SYMLINK libspdk_bdev_null.so 00:08:53.882 LIB libspdk_bdev_gpt.a 00:08:53.882 LIB libspdk_bdev_iscsi.a 00:08:53.882 LIB libspdk_bdev_delay.a 00:08:53.882 SO libspdk_bdev_split.so.6.0 00:08:53.882 SO libspdk_bdev_ftl.so.6.0 00:08:53.882 LIB libspdk_bdev_malloc.a 00:08:53.882 SYMLINK libspdk_bdev_error.so 00:08:53.882 LIB 
libspdk_bdev_aio.a 00:08:53.882 SO libspdk_bdev_passthru.so.6.0 00:08:53.882 LIB libspdk_bdev_zone_block.a 00:08:53.882 SO libspdk_bdev_iscsi.so.6.0 00:08:53.882 SO libspdk_bdev_gpt.so.6.0 00:08:53.882 SO libspdk_bdev_delay.so.6.0 00:08:53.882 SYMLINK libspdk_blobfs_bdev.so 00:08:53.882 SO libspdk_bdev_malloc.so.6.0 00:08:53.882 SO libspdk_bdev_aio.so.6.0 00:08:53.882 SYMLINK libspdk_bdev_ftl.so 00:08:53.882 SYMLINK libspdk_bdev_split.so 00:08:53.882 SO libspdk_bdev_zone_block.so.6.0 00:08:53.882 SYMLINK libspdk_bdev_passthru.so 00:08:53.882 SYMLINK libspdk_bdev_iscsi.so 00:08:53.882 SYMLINK libspdk_bdev_delay.so 00:08:53.882 LIB libspdk_bdev_lvol.a 00:08:53.882 SYMLINK libspdk_bdev_gpt.so 00:08:53.882 SYMLINK libspdk_bdev_malloc.so 00:08:53.882 LIB libspdk_bdev_virtio.a 00:08:53.882 SYMLINK libspdk_bdev_aio.so 00:08:53.882 SYMLINK libspdk_bdev_zone_block.so 00:08:53.882 SO libspdk_bdev_lvol.so.6.0 00:08:53.882 SO libspdk_bdev_virtio.so.6.0 00:08:54.145 SYMLINK libspdk_bdev_lvol.so 00:08:54.145 SYMLINK libspdk_bdev_virtio.so 00:08:54.413 LIB libspdk_bdev_raid.a 00:08:54.413 SO libspdk_bdev_raid.so.6.0 00:08:54.677 SYMLINK libspdk_bdev_raid.so 00:08:55.252 LIB libspdk_bdev_nvme.a 00:08:55.514 SO libspdk_bdev_nvme.so.7.0 00:08:55.514 SYMLINK libspdk_bdev_nvme.so 00:08:56.461 CC module/event/subsystems/iobuf/iobuf.o 00:08:56.461 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:56.461 CC module/event/subsystems/sock/sock.o 00:08:56.461 CC module/event/subsystems/vmd/vmd.o 00:08:56.461 CC module/event/subsystems/scheduler/scheduler.o 00:08:56.461 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:56.461 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:56.461 CC module/event/subsystems/keyring/keyring.o 00:08:56.461 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:08:56.461 LIB libspdk_event_sock.a 00:08:56.462 LIB libspdk_event_keyring.a 00:08:56.462 LIB libspdk_event_iobuf.a 00:08:56.462 LIB libspdk_event_vhost_blk.a 00:08:56.462 LIB libspdk_event_scheduler.a 00:08:56.462 LIB libspdk_event_vmd.a 00:08:56.462 LIB libspdk_event_vfu_tgt.a 00:08:56.462 SO libspdk_event_sock.so.5.0 00:08:56.462 SO libspdk_event_iobuf.so.3.0 00:08:56.462 SO libspdk_event_vhost_blk.so.3.0 00:08:56.462 SO libspdk_event_keyring.so.1.0 00:08:56.462 SO libspdk_event_scheduler.so.4.0 00:08:56.462 SO libspdk_event_vfu_tgt.so.3.0 00:08:56.462 SO libspdk_event_vmd.so.6.0 00:08:56.462 SYMLINK libspdk_event_iobuf.so 00:08:56.462 SYMLINK libspdk_event_sock.so 00:08:56.462 SYMLINK libspdk_event_keyring.so 00:08:56.462 SYMLINK libspdk_event_vhost_blk.so 00:08:56.462 SYMLINK libspdk_event_scheduler.so 00:08:56.462 SYMLINK libspdk_event_vfu_tgt.so 00:08:56.462 SYMLINK libspdk_event_vmd.so 00:08:57.036 CC module/event/subsystems/accel/accel.o 00:08:57.036 LIB libspdk_event_accel.a 00:08:57.036 SO libspdk_event_accel.so.6.0 00:08:57.036 SYMLINK libspdk_event_accel.so 00:08:57.611 CC module/event/subsystems/bdev/bdev.o 00:08:57.611 LIB libspdk_event_bdev.a 00:08:57.611 SO libspdk_event_bdev.so.6.0 00:08:57.874 SYMLINK libspdk_event_bdev.so 00:08:58.137 CC module/event/subsystems/scsi/scsi.o 00:08:58.137 CC module/event/subsystems/nbd/nbd.o 00:08:58.137 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:58.137 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:58.137 CC module/event/subsystems/ublk/ublk.o 00:08:58.399 LIB libspdk_event_nbd.a 00:08:58.399 LIB libspdk_event_ublk.a 00:08:58.399 LIB libspdk_event_scsi.a 00:08:58.399 SO libspdk_event_nbd.so.6.0 00:08:58.399 SO libspdk_event_ublk.so.3.0 00:08:58.399 SO 
libspdk_event_scsi.so.6.0 00:08:58.399 LIB libspdk_event_nvmf.a 00:08:58.399 SYMLINK libspdk_event_nbd.so 00:08:58.399 SYMLINK libspdk_event_ublk.so 00:08:58.399 SO libspdk_event_nvmf.so.6.0 00:08:58.399 SYMLINK libspdk_event_scsi.so 00:08:58.399 SYMLINK libspdk_event_nvmf.so 00:08:58.662 CC module/event/subsystems/iscsi/iscsi.o 00:08:58.662 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:58.925 LIB libspdk_event_vhost_scsi.a 00:08:58.925 LIB libspdk_event_iscsi.a 00:08:58.925 SO libspdk_event_vhost_scsi.so.3.0 00:08:58.925 SO libspdk_event_iscsi.so.6.0 00:08:58.925 SYMLINK libspdk_event_vhost_scsi.so 00:08:59.187 SYMLINK libspdk_event_iscsi.so 00:08:59.187 SO libspdk.so.6.0 00:08:59.188 SYMLINK libspdk.so 00:08:59.760 CC app/trace_record/trace_record.o 00:08:59.760 TEST_HEADER include/spdk/accel_module.h 00:08:59.760 TEST_HEADER include/spdk/accel.h 00:08:59.760 CC app/spdk_nvme_discover/discovery_aer.o 00:08:59.760 CC app/spdk_nvme_perf/perf.o 00:08:59.760 TEST_HEADER include/spdk/base64.h 00:08:59.760 TEST_HEADER include/spdk/barrier.h 00:08:59.760 TEST_HEADER include/spdk/bdev.h 00:08:59.760 CC app/spdk_nvme_identify/identify.o 00:08:59.760 TEST_HEADER include/spdk/assert.h 00:08:59.760 TEST_HEADER include/spdk/bdev_module.h 00:08:59.760 TEST_HEADER include/spdk/bdev_zone.h 00:08:59.760 TEST_HEADER include/spdk/bit_pool.h 00:08:59.760 TEST_HEADER include/spdk/bit_array.h 00:08:59.760 TEST_HEADER include/spdk/blob_bdev.h 00:08:59.760 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:59.760 TEST_HEADER include/spdk/blobfs.h 00:08:59.760 TEST_HEADER include/spdk/conf.h 00:08:59.760 TEST_HEADER include/spdk/blob.h 00:08:59.760 TEST_HEADER include/spdk/config.h 00:08:59.760 TEST_HEADER include/spdk/cpuset.h 00:08:59.760 CC app/iscsi_tgt/iscsi_tgt.o 00:08:59.760 TEST_HEADER include/spdk/crc16.h 00:08:59.760 TEST_HEADER include/spdk/dma.h 00:08:59.760 TEST_HEADER include/spdk/crc32.h 00:08:59.760 TEST_HEADER include/spdk/crc64.h 00:08:59.760 TEST_HEADER include/spdk/env_dpdk.h 00:08:59.760 CC app/spdk_top/spdk_top.o 00:08:59.760 TEST_HEADER include/spdk/env.h 00:08:59.760 TEST_HEADER include/spdk/dif.h 00:08:59.760 TEST_HEADER include/spdk/event.h 00:08:59.760 TEST_HEADER include/spdk/endian.h 00:08:59.760 CXX app/trace/trace.o 00:08:59.760 CC app/vhost/vhost.o 00:08:59.760 TEST_HEADER include/spdk/fd_group.h 00:08:59.760 CC app/spdk_lspci/spdk_lspci.o 00:08:59.760 TEST_HEADER include/spdk/fd.h 00:08:59.760 TEST_HEADER include/spdk/ftl.h 00:08:59.760 CC app/spdk_dd/spdk_dd.o 00:08:59.760 TEST_HEADER include/spdk/gpt_spec.h 00:08:59.760 TEST_HEADER include/spdk/hexlify.h 00:08:59.760 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:59.760 TEST_HEADER include/spdk/histogram_data.h 00:08:59.760 TEST_HEADER include/spdk/file.h 00:08:59.760 TEST_HEADER include/spdk/idxd.h 00:08:59.760 TEST_HEADER include/spdk/init.h 00:08:59.760 TEST_HEADER include/spdk/ioat.h 00:08:59.760 TEST_HEADER include/spdk/ioat_spec.h 00:08:59.760 TEST_HEADER include/spdk/iscsi_spec.h 00:08:59.760 TEST_HEADER include/spdk/idxd_spec.h 00:08:59.760 TEST_HEADER include/spdk/json.h 00:08:59.760 CC test/rpc_client/rpc_client_test.o 00:08:59.760 TEST_HEADER include/spdk/keyring.h 00:08:59.760 TEST_HEADER include/spdk/jsonrpc.h 00:08:59.760 CC app/spdk_tgt/spdk_tgt.o 00:08:59.760 TEST_HEADER include/spdk/likely.h 00:08:59.760 TEST_HEADER include/spdk/log.h 00:08:59.760 TEST_HEADER include/spdk/lvol.h 00:08:59.760 TEST_HEADER include/spdk/keyring_module.h 00:08:59.760 TEST_HEADER include/spdk/memory.h 00:09:00.020 TEST_HEADER 
include/spdk/nbd.h 00:09:00.020 CC test/app/histogram_perf/histogram_perf.o 00:09:00.020 TEST_HEADER include/spdk/nvme.h 00:09:00.020 TEST_HEADER include/spdk/mmio.h 00:09:00.020 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:00.020 TEST_HEADER include/spdk/notify.h 00:09:00.020 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:00.020 TEST_HEADER include/spdk/nvme_intel.h 00:09:00.020 CC app/nvmf_tgt/nvmf_main.o 00:09:00.020 TEST_HEADER include/spdk/nvme_spec.h 00:09:00.020 CC test/nvme/cuse/cuse.o 00:09:00.020 TEST_HEADER include/spdk/nvme_zns.h 00:09:00.020 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:00.020 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:00.020 TEST_HEADER include/spdk/nvmf.h 00:09:00.020 CC examples/vmd/led/led.o 00:09:00.021 TEST_HEADER include/spdk/nvmf_spec.h 00:09:00.021 TEST_HEADER include/spdk/nvmf_transport.h 00:09:00.021 CC test/event/reactor_perf/reactor_perf.o 00:09:00.021 TEST_HEADER include/spdk/opal_spec.h 00:09:00.021 CC test/nvme/reserve/reserve.o 00:09:00.021 CC test/dma/test_dma/test_dma.o 00:09:00.021 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:00.021 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:00.021 TEST_HEADER include/spdk/pipe.h 00:09:00.021 TEST_HEADER include/spdk/opal.h 00:09:00.021 CC test/nvme/fused_ordering/fused_ordering.o 00:09:00.021 CC test/nvme/aer/aer.o 00:09:00.021 CC test/nvme/err_injection/err_injection.o 00:09:00.021 CC test/nvme/simple_copy/simple_copy.o 00:09:00.021 TEST_HEADER include/spdk/rpc.h 00:09:00.021 TEST_HEADER include/spdk/pci_ids.h 00:09:00.021 TEST_HEADER include/spdk/queue.h 00:09:00.021 TEST_HEADER include/spdk/scsi.h 00:09:00.021 CC test/nvme/e2edp/nvme_dp.o 00:09:00.021 TEST_HEADER include/spdk/reduce.h 00:09:00.021 CC examples/bdev/bdevperf/bdevperf.o 00:09:00.021 CC test/bdev/bdevio/bdevio.o 00:09:00.021 CC test/nvme/compliance/nvme_compliance.o 00:09:00.021 TEST_HEADER include/spdk/sock.h 00:09:00.021 CC test/app/jsoncat/jsoncat.o 00:09:00.021 TEST_HEADER include/spdk/string.h 00:09:00.021 TEST_HEADER include/spdk/scheduler.h 00:09:00.021 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:00.021 CC examples/nvme/arbitration/arbitration.o 00:09:00.021 TEST_HEADER include/spdk/trace.h 00:09:00.021 CC examples/thread/thread/thread_ex.o 00:09:00.021 TEST_HEADER include/spdk/scsi_spec.h 00:09:00.021 TEST_HEADER include/spdk/trace_parser.h 00:09:00.021 CC examples/vmd/lsvmd/lsvmd.o 00:09:00.021 CC test/env/vtophys/vtophys.o 00:09:00.021 TEST_HEADER include/spdk/tree.h 00:09:00.021 TEST_HEADER include/spdk/stdinc.h 00:09:00.021 CC examples/sock/hello_world/hello_sock.o 00:09:00.021 TEST_HEADER include/spdk/ublk.h 00:09:00.021 CC test/nvme/sgl/sgl.o 00:09:00.021 CC test/nvme/connect_stress/connect_stress.o 00:09:00.021 TEST_HEADER include/spdk/thread.h 00:09:00.021 CC test/thread/poller_perf/poller_perf.o 00:09:00.021 CC examples/ioat/verify/verify.o 00:09:00.021 TEST_HEADER include/spdk/uuid.h 00:09:00.021 CC test/event/scheduler/scheduler.o 00:09:00.021 CC examples/blob/cli/blobcli.o 00:09:00.021 LINK spdk_trace_record 00:09:00.021 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:00.021 CC test/event/event_perf/event_perf.o 00:09:00.021 LINK spdk_nvme_discover 00:09:00.021 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:00.021 LINK spdk_lspci 00:09:00.021 TEST_HEADER include/spdk/util.h 00:09:00.021 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:00.021 CC examples/nvme/reconnect/reconnect.o 00:09:00.021 TEST_HEADER include/spdk/version.h 00:09:00.021 CC test/env/pci/pci_ut.o 00:09:00.021 CC test/app/stub/stub.o 
00:09:00.021 CC test/nvme/overhead/overhead.o 00:09:00.021 LINK vhost 00:09:00.021 CC test/nvme/reset/reset.o 00:09:00.021 TEST_HEADER include/spdk/vmd.h 00:09:00.021 CC app/fio/nvme/fio_plugin.o 00:09:00.021 CC test/app/bdev_svc/bdev_svc.o 00:09:00.021 TEST_HEADER include/spdk/xor.h 00:09:00.021 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:00.021 CC test/nvme/boot_partition/boot_partition.o 00:09:00.021 TEST_HEADER include/spdk/vhost.h 00:09:00.021 CXX test/cpp_headers/accel.o 00:09:00.309 CC test/event/reactor/reactor.o 00:09:00.309 CC app/fio/bdev/fio_plugin.o 00:09:00.309 LINK interrupt_tgt 00:09:00.309 TEST_HEADER include/spdk/zipf.h 00:09:00.309 CC examples/nvme/hello_world/hello_world.o 00:09:00.309 CC examples/util/zipf/zipf.o 00:09:00.309 CXX test/cpp_headers/assert.o 00:09:00.309 CC test/env/memory/memory_ut.o 00:09:00.309 CC examples/idxd/perf/perf.o 00:09:00.309 CC test/nvme/fdp/fdp.o 00:09:00.309 CXX test/cpp_headers/bdev_module.o 00:09:00.309 CXX test/cpp_headers/accel_module.o 00:09:00.309 CC test/nvme/startup/startup.o 00:09:00.309 CXX test/cpp_headers/bdev.o 00:09:00.309 LINK led 00:09:00.309 CXX test/cpp_headers/barrier.o 00:09:00.309 CXX test/cpp_headers/base64.o 00:09:00.309 LINK histogram_perf 00:09:00.309 LINK reactor_perf 00:09:00.309 CXX test/cpp_headers/bit_array.o 00:09:00.309 CXX test/cpp_headers/bdev_zone.o 00:09:00.309 CXX test/cpp_headers/blob_bdev.o 00:09:00.309 CXX test/cpp_headers/bit_pool.o 00:09:00.309 CXX test/cpp_headers/blobfs.o 00:09:00.309 CC test/accel/dif/dif.o 00:09:00.309 CXX test/cpp_headers/blob.o 00:09:00.309 CXX test/cpp_headers/blobfs_bdev.o 00:09:00.309 CXX test/cpp_headers/conf.o 00:09:00.309 CXX test/cpp_headers/config.o 00:09:00.309 CC examples/ioat/perf/perf.o 00:09:00.309 CXX test/cpp_headers/crc16.o 00:09:00.309 CXX test/cpp_headers/cpuset.o 00:09:00.309 CXX test/cpp_headers/dma.o 00:09:00.309 CC test/event/app_repeat/app_repeat.o 00:09:00.309 CC test/blobfs/mkfs/mkfs.o 00:09:00.309 LINK spdk_tgt 00:09:00.309 CXX test/cpp_headers/dif.o 00:09:00.309 CXX test/cpp_headers/env_dpdk.o 00:09:00.309 CXX test/cpp_headers/crc32.o 00:09:00.309 LINK rpc_client_test 00:09:00.309 CXX test/cpp_headers/crc64.o 00:09:00.309 CXX test/cpp_headers/endian.o 00:09:00.309 CXX test/cpp_headers/event.o 00:09:00.309 CXX test/cpp_headers/env.o 00:09:00.309 CXX test/cpp_headers/fd_group.o 00:09:00.309 CXX test/cpp_headers/hexlify.o 00:09:00.309 CXX test/cpp_headers/fd.o 00:09:00.309 LINK jsoncat 00:09:00.309 CXX test/cpp_headers/idxd.o 00:09:00.309 CXX test/cpp_headers/file.o 00:09:00.309 CXX test/cpp_headers/ftl.o 00:09:00.309 CXX test/cpp_headers/init.o 00:09:00.309 LINK pmr_persistence 00:09:00.309 CXX test/cpp_headers/iscsi_spec.o 00:09:00.309 CXX test/cpp_headers/gpt_spec.o 00:09:00.309 CXX test/cpp_headers/jsonrpc.o 00:09:00.309 CXX test/cpp_headers/keyring_module.o 00:09:00.309 CXX test/cpp_headers/keyring.o 00:09:00.309 CC examples/nvme/abort/abort.o 00:09:00.309 LINK cmb_copy 00:09:00.309 CXX test/cpp_headers/likely.o 00:09:00.309 CXX test/cpp_headers/log.o 00:09:00.309 LINK reserve 00:09:00.309 CXX test/cpp_headers/lvol.o 00:09:00.309 CXX test/cpp_headers/histogram_data.o 00:09:00.309 CXX test/cpp_headers/mmio.o 00:09:00.309 CXX test/cpp_headers/idxd_spec.o 00:09:00.309 CC examples/nvme/hotplug/hotplug.o 00:09:00.309 CXX test/cpp_headers/ioat.o 00:09:00.309 CXX test/cpp_headers/notify.o 00:09:00.309 CXX test/cpp_headers/nvme.o 00:09:00.309 CXX test/cpp_headers/nvme_intel.o 00:09:00.309 CXX test/cpp_headers/json.o 00:09:00.309 CC 
examples/nvmf/nvmf/nvmf.o 00:09:00.309 CXX test/cpp_headers/ioat_spec.o 00:09:00.309 LINK event_perf 00:09:00.309 CC test/env/mem_callbacks/mem_callbacks.o 00:09:00.309 CXX test/cpp_headers/nvme_spec.o 00:09:00.309 CXX test/cpp_headers/nvme_zns.o 00:09:00.580 CXX test/cpp_headers/nbd.o 00:09:00.580 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:00.580 CXX test/cpp_headers/nvmf.o 00:09:00.580 CXX test/cpp_headers/nvmf_spec.o 00:09:00.580 CXX test/cpp_headers/memory.o 00:09:00.580 CXX test/cpp_headers/nvmf_transport.o 00:09:00.580 CC examples/bdev/hello_world/hello_bdev.o 00:09:00.580 CXX test/cpp_headers/nvme_ocssd.o 00:09:00.580 CXX test/cpp_headers/nvmf_cmd.o 00:09:00.580 CXX test/cpp_headers/opal_spec.o 00:09:00.580 CXX test/cpp_headers/pci_ids.o 00:09:00.580 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:00.580 CXX test/cpp_headers/queue.o 00:09:00.580 CXX test/cpp_headers/reduce.o 00:09:00.580 LINK env_dpdk_post_init 00:09:00.580 LINK spdk_dd 00:09:00.580 CXX test/cpp_headers/opal.o 00:09:00.580 LINK scheduler 00:09:00.580 CXX test/cpp_headers/pipe.o 00:09:00.580 LINK nvmf_tgt 00:09:00.580 LINK vtophys 00:09:00.580 CC examples/accel/perf/accel_perf.o 00:09:00.580 CXX test/cpp_headers/rpc.o 00:09:00.580 LINK doorbell_aers 00:09:00.580 LINK boot_partition 00:09:00.580 CXX test/cpp_headers/scheduler.o 00:09:00.580 CXX test/cpp_headers/scsi_spec.o 00:09:00.580 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:00.580 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:00.580 CXX test/cpp_headers/scsi.o 00:09:00.580 LINK bdev_svc 00:09:00.847 LINK zipf 00:09:00.847 LINK test_dma 00:09:00.847 LINK hello_sock 00:09:00.847 LINK reset 00:09:00.847 LINK connect_stress 00:09:00.847 LINK thread 00:09:00.847 LINK bdevio 00:09:00.847 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:00.847 LINK nvme_dp 00:09:00.847 CXX test/cpp_headers/sock.o 00:09:00.847 CXX test/cpp_headers/stdinc.o 00:09:00.847 CXX test/cpp_headers/string.o 00:09:00.847 CXX test/cpp_headers/thread.o 00:09:00.847 CXX test/cpp_headers/trace.o 00:09:01.109 CXX test/cpp_headers/tree.o 00:09:01.109 CC test/lvol/esnap/esnap.o 00:09:01.109 CXX test/cpp_headers/ublk.o 00:09:01.109 CXX test/cpp_headers/trace_parser.o 00:09:01.109 CXX test/cpp_headers/util.o 00:09:01.109 LINK aer 00:09:01.109 CC examples/blob/hello_world/hello_blob.o 00:09:01.109 LINK mkfs 00:09:01.109 CXX test/cpp_headers/version.o 00:09:01.109 CXX test/cpp_headers/uuid.o 00:09:01.109 LINK reconnect 00:09:01.109 CXX test/cpp_headers/vfio_user_pci.o 00:09:01.109 CXX test/cpp_headers/vfio_user_spec.o 00:09:01.109 CXX test/cpp_headers/vhost.o 00:09:01.109 LINK nvme_manage 00:09:01.109 LINK verify 00:09:01.109 LINK blobcli 00:09:01.109 CXX test/cpp_headers/xor.o 00:09:01.109 LINK pci_ut 00:09:01.110 CXX test/cpp_headers/zipf.o 00:09:01.110 LINK hotplug 00:09:01.110 CXX test/cpp_headers/vmd.o 00:09:01.110 LINK dif 00:09:01.370 LINK idxd_perf 00:09:01.370 LINK fdp 00:09:01.370 LINK sgl 00:09:01.370 LINK arbitration 00:09:01.370 LINK hello_bdev 00:09:01.370 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:01.370 LINK spdk_nvme_identify 00:09:01.370 LINK abort 00:09:01.370 LINK spdk_trace 00:09:01.632 LINK bdevperf 00:09:01.632 LINK hello_blob 00:09:01.632 LINK nvme_compliance 00:09:01.632 LINK spdk_bdev 00:09:01.632 LINK spdk_nvme 00:09:01.632 LINK iscsi_tgt 00:09:01.632 LINK lsvmd 00:09:01.632 LINK vhost_fuzz 00:09:01.632 LINK cuse 00:09:01.893 LINK simple_copy 00:09:01.893 LINK hello_world 00:09:01.893 LINK fused_ordering 00:09:01.893 LINK poller_perf 00:09:01.893 LINK memory_ut 00:09:01.893 
LINK reactor 00:09:01.893 LINK err_injection 00:09:01.893 LINK app_repeat 00:09:01.893 LINK nvme_fuzz 00:09:01.893 LINK startup 00:09:01.893 LINK stub 00:09:02.156 LINK ioat_perf 00:09:02.156 LINK overhead 00:09:02.156 LINK nvmf 00:09:02.156 LINK accel_perf 00:09:02.421 LINK mem_callbacks 00:09:02.421 LINK spdk_nvme_perf 00:09:02.421 LINK spdk_top 00:09:02.421 LINK iscsi_fuzz 00:09:05.730 LINK esnap 00:09:06.305 00:09:06.305 real 0m52.675s 00:09:06.305 user 6m57.942s 00:09:06.305 sys 6m13.516s 00:09:06.305 09:21:59 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:09:06.305 09:21:59 make -- common/autotest_common.sh@10 -- $ set +x 00:09:06.305 ************************************ 00:09:06.305 END TEST make 00:09:06.305 ************************************ 00:09:06.305 09:21:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:06.305 09:21:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:06.305 09:21:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:06.305 09:21:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.305 09:21:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:09:06.305 09:21:59 -- pm/common@44 -- $ pid=5548 00:09:06.305 09:21:59 -- pm/common@50 -- $ kill -TERM 5548 00:09:06.305 09:21:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.305 09:21:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:09:06.305 09:21:59 -- pm/common@44 -- $ pid=5549 00:09:06.305 09:21:59 -- pm/common@50 -- $ kill -TERM 5549 00:09:06.305 09:21:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.305 09:21:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:09:06.305 09:21:59 -- pm/common@44 -- $ pid=5551 00:09:06.305 09:21:59 -- pm/common@50 -- $ kill -TERM 5551 00:09:06.305 09:21:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.305 09:21:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:09:06.305 09:21:59 -- pm/common@44 -- $ pid=5579 00:09:06.305 09:21:59 -- pm/common@50 -- $ sudo -E kill -TERM 5579 00:09:06.305 09:21:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.305 09:21:59 -- nvmf/common.sh@7 -- # uname -s 00:09:06.305 09:21:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.305 09:21:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.305 09:21:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.305 09:21:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.305 09:21:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.305 09:21:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.305 09:21:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.305 09:21:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.305 09:21:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.305 09:21:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.305 09:21:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:06.305 09:21:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:06.305 09:21:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:06.305 09:21:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.305 09:21:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.305 09:21:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.305 09:21:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.568 09:21:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.568 09:21:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.568 09:21:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.568 09:21:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.568 09:21:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.568 09:21:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.568 09:21:59 -- paths/export.sh@5 -- # export PATH 00:09:06.568 09:21:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.568 09:21:59 -- nvmf/common.sh@47 -- # : 0 00:09:06.568 09:21:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.568 09:21:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.568 09:21:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.568 09:21:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.568 09:21:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.568 09:21:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.568 09:21:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.568 09:21:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.568 09:21:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:06.568 09:21:59 -- spdk/autotest.sh@32 -- # uname -s 00:09:06.568 09:21:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:06.568 09:21:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:06.568 09:21:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:09:06.568 09:21:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:09:06.568 09:21:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:09:06.568 09:21:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:06.568 09:21:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:06.568 09:21:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:06.568 09:21:59 -- spdk/autotest.sh@48 -- # 
udevadm_pid=70658 00:09:06.568 09:21:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:06.568 09:21:59 -- pm/common@17 -- # local monitor 00:09:06.568 09:21:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:06.568 09:21:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.568 09:21:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.568 09:21:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.568 09:21:59 -- pm/common@21 -- # date +%s 00:09:06.568 09:21:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:06.568 09:21:59 -- pm/common@25 -- # sleep 1 00:09:06.568 09:21:59 -- pm/common@21 -- # date +%s 00:09:06.568 09:21:59 -- pm/common@21 -- # date +%s 00:09:06.568 09:21:59 -- pm/common@21 -- # date +%s 00:09:06.568 09:21:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715844119 00:09:06.568 09:21:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715844119 00:09:06.568 09:21:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715844119 00:09:06.568 09:21:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715844119 00:09:06.568 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715844119_collect-vmstat.pm.log 00:09:06.568 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715844119_collect-cpu-load.pm.log 00:09:06.568 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715844119_collect-cpu-temp.pm.log 00:09:06.568 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715844119_collect-bmc-pm.bmc.pm.log 00:09:07.516 09:22:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:07.516 09:22:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:07.516 09:22:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:07.516 09:22:00 -- common/autotest_common.sh@10 -- # set +x 00:09:07.516 09:22:00 -- spdk/autotest.sh@59 -- # create_test_list 00:09:07.516 09:22:00 -- common/autotest_common.sh@744 -- # xtrace_disable 00:09:07.516 09:22:00 -- common/autotest_common.sh@10 -- # set +x 00:09:07.516 09:22:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:09:07.516 09:22:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.516 09:22:01 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.516 09:22:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:09:07.516 09:22:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:07.516 09:22:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:07.516 09:22:01 -- 
common/autotest_common.sh@1451 -- # uname 00:09:07.516 09:22:01 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:09:07.516 09:22:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:07.516 09:22:01 -- common/autotest_common.sh@1471 -- # uname 00:09:07.516 09:22:01 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:09:07.516 09:22:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:09:07.516 09:22:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:09:07.516 09:22:01 -- spdk/autotest.sh@72 -- # hash lcov 00:09:07.516 09:22:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:07.516 09:22:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:09:07.516 --rc lcov_branch_coverage=1 00:09:07.516 --rc lcov_function_coverage=1 00:09:07.516 --rc genhtml_branch_coverage=1 00:09:07.516 --rc genhtml_function_coverage=1 00:09:07.516 --rc genhtml_legend=1 00:09:07.516 --rc geninfo_all_blocks=1 00:09:07.516 ' 00:09:07.516 09:22:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:09:07.516 --rc lcov_branch_coverage=1 00:09:07.516 --rc lcov_function_coverage=1 00:09:07.516 --rc genhtml_branch_coverage=1 00:09:07.516 --rc genhtml_function_coverage=1 00:09:07.516 --rc genhtml_legend=1 00:09:07.516 --rc geninfo_all_blocks=1 00:09:07.516 ' 00:09:07.516 09:22:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:09:07.516 --rc lcov_branch_coverage=1 00:09:07.516 --rc lcov_function_coverage=1 00:09:07.516 --rc genhtml_branch_coverage=1 00:09:07.516 --rc genhtml_function_coverage=1 00:09:07.516 --rc genhtml_legend=1 00:09:07.516 --rc geninfo_all_blocks=1 00:09:07.516 --no-external' 00:09:07.516 09:22:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:09:07.516 --rc lcov_branch_coverage=1 00:09:07.516 --rc lcov_function_coverage=1 00:09:07.516 --rc genhtml_branch_coverage=1 00:09:07.516 --rc genhtml_function_coverage=1 00:09:07.516 --rc genhtml_legend=1 00:09:07.516 --rc geninfo_all_blocks=1 00:09:07.516 --no-external' 00:09:07.516 09:22:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:09:07.778 lcov: LCOV version 1.14 00:09:07.778 09:22:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:09:20.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:20.025 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:20.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:09:20.025 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:09:20.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:09:20.026 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:09:20.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions 
found 00:09:20.026 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:09:34.952 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:09:34.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:09:34.953 
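The lcov invocation traced above captures an initial (baseline) coverage snapshot before any test has executed; the geninfo "no functions found" warnings that follow only mean those .gcno files carry no instrumented functions yet. A minimal standalone sketch of that baseline capture, using the same flags that appear in the trace and with SPDK_DIR/OUT_DIR as placeholder paths (assumptions, not the literal workspace paths):

#!/usr/bin/env bash
# Baseline coverage capture, mirroring the lcov flags shown in the trace above.
set -e
SPDK_DIR=/path/to/spdk            # placeholder: tree built with gcc coverage instrumentation
OUT_DIR=$SPDK_DIR/../output       # placeholder for the autotest output directory

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
  --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
  --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external"

# -c/-i capture "initial" (zero-execution) data straight from the .gcno files,
# so sources never touched by the tests still show up in the final report.
lcov $LCOV_OPTS -q -c -i -t Baseline -d "$SPDK_DIR" -o "$OUT_DIR/cov_base.info"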
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions 
found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:09:34.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:09:34.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:09:34.954 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:09:34.954 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:09:34.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:09:34.954 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:09:36.343 09:22:29 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:09:36.343 09:22:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:36.343 09:22:29 -- common/autotest_common.sh@10 -- # set +x 00:09:36.343 09:22:29 -- spdk/autotest.sh@91 -- # rm -f 00:09:36.343 09:22:29 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:39.651 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:09:39.651 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:09:39.651 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:09:39.651 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:09:39.651 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:09:39.651 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:65:00.0 (144d a80a): Already using the nvme driver 00:09:39.913 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:09:39.913 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:09:40.175 09:22:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:09:40.175 09:22:33 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:40.175 09:22:33 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:40.175 09:22:33 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:40.175 09:22:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:40.175 09:22:33 
-- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:09:40.175 09:22:33 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:09:40.175 09:22:33 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:40.175 09:22:33 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:40.175 09:22:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:09:40.175 09:22:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:09:40.175 09:22:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:09:40.175 09:22:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:09:40.175 09:22:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:09:40.175 09:22:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:40.436 No valid GPT data, bailing 00:09:40.436 09:22:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:40.436 09:22:33 -- scripts/common.sh@391 -- # pt= 00:09:40.436 09:22:33 -- scripts/common.sh@392 -- # return 1 00:09:40.436 09:22:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:40.436 1+0 records in 00:09:40.436 1+0 records out 00:09:40.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457155 s, 229 MB/s 00:09:40.436 09:22:33 -- spdk/autotest.sh@118 -- # sync 00:09:40.436 09:22:33 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:40.436 09:22:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:40.436 09:22:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:50.462 09:22:42 -- spdk/autotest.sh@124 -- # uname -s 00:09:50.462 09:22:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:09:50.462 09:22:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:09:50.462 09:22:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:50.462 09:22:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.462 09:22:42 -- common/autotest_common.sh@10 -- # set +x 00:09:50.462 ************************************ 00:09:50.462 START TEST setup.sh 00:09:50.462 ************************************ 00:09:50.462 09:22:42 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:09:50.462 * Looking for test storage... 00:09:50.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:09:50.462 09:22:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:09:50.462 09:22:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:09:50.462 09:22:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:09:50.462 09:22:42 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:50.462 09:22:42 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.462 09:22:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:50.462 ************************************ 00:09:50.462 START TEST acl 00:09:50.462 ************************************ 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:09:50.462 * Looking for test storage... 
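The pre-cleanup traced above decides whether /dev/nvme0n1 may be scrubbed: a namespace is left alone if the kernel reports it as zoned or if blkid finds an existing partition table; otherwise its first megabyte is zeroed. A condensed sketch of that check, reusing the commands visible in the trace; DEV is a placeholder and the branch structure is an approximation of the helper functions, not a copy of them:

#!/usr/bin/env bash
# Decide whether an NVMe namespace is safe to wipe, as in the pre-cleanup above.
set -e
DEV=/dev/nvme0n1                  # placeholder device node

# Skip zoned namespaces: the kernel exposes the zone model in queue/zoned.
if [[ -e /sys/block/${DEV##*/}/queue/zoned ]] &&
   [[ $(cat /sys/block/${DEV##*/}/queue/zoned) != none ]]; then
    echo "skipping zoned device $DEV"; exit 0
fi

# Skip devices that already carry a partition table.
if [[ -n $(blkid -s PTTYPE -o value "$DEV") ]]; then
    echo "$DEV has a partition table, leaving it alone"; exit 0
fi

# Otherwise zero the first MiB so stale metadata cannot confuse later tests.
dd if=/dev/zero of="$DEV" bs=1M count=1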
00:09:50.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:09:50.462 09:22:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:09:50.462 09:22:42 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:50.463 09:22:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:09:50.463 09:22:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:09:50.463 09:22:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:09:50.463 09:22:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:09:50.463 09:22:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:09:50.463 09:22:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:09:50.463 09:22:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:50.463 09:22:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:53.772 09:22:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:09:53.772 09:22:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:09:53.772 09:22:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:53.772 09:22:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:09:53.772 09:22:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:09:53.772 09:22:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:09:57.096 Hugepages 00:09:57.096 node hugesize free / total 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 00:09:57.096 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.096 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:09:57.097 09:22:50 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:09:57.097 09:22:50 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:57.097 09:22:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:57.097 09:22:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:09:57.097 ************************************ 00:09:57.097 START TEST denied 00:09:57.097 ************************************ 00:09:57.097 09:22:50 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:09:57.097 09:22:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:09:57.097 09:22:50 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:09:57.097 09:22:50 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:09:57.097 09:22:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:09:57.097 09:22:50 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:10:01.314 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:10:01.314 09:22:54 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:01.314 09:22:54 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:10:06.608 00:10:06.608 real 0m8.946s 00:10:06.608 user 0m3.016s 00:10:06.608 sys 0m5.121s 00:10:06.608 09:22:59 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:06.608 09:22:59 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:10:06.608 ************************************ 00:10:06.608 END TEST denied 00:10:06.608 ************************************ 00:10:06.608 09:22:59 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:10:06.608 09:22:59 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:06.608 09:22:59 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.608 09:22:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:06.608 ************************************ 00:10:06.608 START TEST allowed 00:10:06.608 ************************************ 00:10:06.608 09:22:59 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:10:06.608 09:22:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:10:06.608 09:22:59 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:10:06.608 09:22:59 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:10:06.608 09:22:59 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:10:06.608 09:22:59 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:10:11.923 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:10:11.923 09:23:05 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:10:11.923 09:23:05 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:10:11.923 09:23:05 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:10:11.923 09:23:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:11.923 09:23:05 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:10:16.145 00:10:16.145 real 0m9.671s 00:10:16.145 user 0m2.865s 00:10:16.145 sys 0m5.074s 00:10:16.145 09:23:09 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.145 09:23:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:10:16.145 ************************************ 00:10:16.145 END TEST allowed 00:10:16.145 ************************************ 00:10:16.145 00:10:16.145 real 0m26.488s 00:10:16.145 user 0m8.800s 00:10:16.145 sys 0m15.346s 00:10:16.145 09:23:09 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.145 09:23:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:16.145 ************************************ 00:10:16.145 END TEST acl 00:10:16.145 ************************************ 00:10:16.145 09:23:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:10:16.145 09:23:09 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:16.145 09:23:09 setup.sh -- 
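The denied/allowed ACL tests above drive scripts/setup.sh purely through the PCI_BLOCKED and PCI_ALLOWED environment variables and then grep its output for the expected binding decision. A hedged sketch of that invocation pattern, with SPDK_DIR as a placeholder and the 0000:65:00.0 address taken from the log; the grep patterns are those used in the trace:

#!/usr/bin/env bash
# Reproduce the allow/deny checks above: setup.sh honours PCI_BLOCKED/PCI_ALLOWED.
set -e
SPDK_DIR=/path/to/spdk            # placeholder for the jenkins workspace path
NVME_BDF=0000:65:00.0             # controller exercised in the log

# "denied": the blocked controller must be skipped entirely.
PCI_BLOCKED=" $NVME_BDF" "$SPDK_DIR/scripts/setup.sh" config \
    | grep "Skipping denied controller at $NVME_BDF"

# "allowed": after a reset, only the allowed controller is rebound
# (nvme -> vfio-pci in the run above).
"$SPDK_DIR/scripts/setup.sh" reset
PCI_ALLOWED="$NVME_BDF" "$SPDK_DIR/scripts/setup.sh" config \
    | grep -E "$NVME_BDF .*: nvme -> .*"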
common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.145 09:23:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:16.145 ************************************ 00:10:16.145 START TEST hugepages 00:10:16.145 ************************************ 00:10:16.145 09:23:09 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:10:16.145 * Looking for test storage... 00:10:16.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109069076 kB' 'MemAvailable: 112308492 kB' 'Buffers: 11428 kB' 'Cached: 8822600 kB' 'SwapCached: 0 kB' 'Active: 6157440 kB' 'Inactive: 3403736 kB' 'Active(anon): 5612320 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 730752 kB' 'Mapped: 144908 kB' 'Shmem: 4885172 kB' 'KReclaimable: 229780 kB' 'Slab: 760856 kB' 'SReclaimable: 229780 kB' 'SUnreclaim: 531076 kB' 'KernelStack: 26896 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8431208 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231348 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.145 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.146 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
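The scan resolves Hugepagesize to 2048 kB just below (the `echo 2048` / `return 0` lines), after which clear_hp writes 0 to every per-node hugepage count so each test starts from a clean slate. A standalone sketch of that clearing step, assuming only the standard Linux sysfs layout and root privileges (not SPDK-specific code):

    #!/usr/bin/env bash
    # Release any leftover per-NUMA-node hugepage reservations, in the spirit
    # of the clear_hp loop traced below.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            [[ -e "$hp/nr_hugepages" ]] || continue
            # Writing 0 frees all unused hugepages of this size on this node.
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # The test also exports CLEAR_HUGE=yes (see the trace further on) so that
    # later scripts/setup.sh runs start from a cleared hugepage state as well.
    export CLEAR_HUGE=yes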
00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:16.147 09:23:09 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:16.147 09:23:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:10:16.147 09:23:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:16.147 09:23:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.147 09:23:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 ************************************ 00:10:16.147 START TEST default_setup 00:10:16.147 ************************************ 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:10:16.147 09:23:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:19.455 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
00:10:19.455 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:10:19.455 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:10:19.717 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:10:19.717 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:10:19.717 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:10:19.717 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:10:19.717 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:10:19.717 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.986 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111237952 kB' 'MemAvailable: 114476636 kB' 'Buffers: 11428 kB' 'Cached: 8822776 kB' 'SwapCached: 0 kB' 'Active: 6175404 kB' 'Inactive: 3403736 kB' 'Active(anon): 5630284 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748312 kB' 'Mapped: 145128 kB' 'Shmem: 4885348 kB' 'KReclaimable: 228316 kB' 'Slab: 757132 kB' 'SReclaimable: 228316 kB' 'SUnreclaim: 528816 kB' 'KernelStack: 26944 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8414720 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 231268 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
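The `ioatdma -> vfio-pci` and `nvme -> vfio-pci` lines above are scripts/setup.sh reporting driver rebinds for the I/OAT DMA engines and the NVMe disk. A generic way to perform such a rebind through sysfs is sketched here; the device address is copied from the log, but the mechanism shown (driver_override plus drivers_probe) is the stock kernel interface and not necessarily the script's exact code path:

    #!/usr/bin/env bash
    # Rebind one PCI device to vfio-pci via sysfs (requires root).
    bdf=0000:65:00.0   # example address taken from the log above

    modprobe vfio-pci

    # Detach from the current driver, if any.
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind
    fi

    # Ask the PCI core to use vfio-pci for this device, then (re)probe it.
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf"  > /sys/bus/pci/drivers_probe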
00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.987 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
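Around this point verify_nr_hugepages repeats the same field-by-field scan for AnonHugePages, HugePages_Surp and HugePages_Rsvd. When checking the same state by hand, the counters can be read in one pass; the commands below are an illustrative shortcut, not part of the test:

    # One-shot view of the counters verify_nr_hugepages inspects one at a time.
    awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):/' /proc/meminfo

    # Per-node totals for the 2048 kB size, matching the nodes_sys bookkeeping.
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages) x 2048kB"
    done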
00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111237420 kB' 'MemAvailable: 114476084 kB' 'Buffers: 11428 kB' 'Cached: 8822780 kB' 'SwapCached: 0 kB' 'Active: 6175548 kB' 'Inactive: 3403736 kB' 'Active(anon): 5630428 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748532 kB' 'Mapped: 145112 kB' 'Shmem: 4885352 kB' 'KReclaimable: 228276 kB' 'Slab: 757068 kB' 'SReclaimable: 228276 kB' 'SUnreclaim: 528792 kB' 'KernelStack: 27040 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8416348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.988 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.989 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111237396 kB' 'MemAvailable: 114476060 kB' 'Buffers: 11428 kB' 'Cached: 8822780 kB' 'SwapCached: 0 kB' 'Active: 6175572 kB' 'Inactive: 3403736 kB' 'Active(anon): 5630452 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748500 kB' 'Mapped: 145112 kB' 'Shmem: 4885352 kB' 'KReclaimable: 228276 kB' 'Slab: 757096 kB' 'SReclaimable: 228276 kB' 'SUnreclaim: 528820 kB' 'KernelStack: 27040 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8416368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 
09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.990 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.991 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:19.992 nr_hugepages=1024 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:19.992 resv_hugepages=0 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:19.992 surplus_hugepages=0 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:19.992 anon_hugepages=0 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111236672 kB' 'MemAvailable: 114475336 kB' 'Buffers: 11428 kB' 'Cached: 8822780 kB' 'SwapCached: 0 kB' 'Active: 6175820 
kB' 'Inactive: 3403736 kB' 'Active(anon): 5630700 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 748748 kB' 'Mapped: 145112 kB' 'Shmem: 4885352 kB' 'KReclaimable: 228276 kB' 'Slab: 757096 kB' 'SReclaimable: 228276 kB' 'SUnreclaim: 528820 kB' 'KernelStack: 27072 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8416392 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231396 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.992 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.993 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:10:19.994 
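The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key with IFS=': ' until it reaches the requested field, here HugePages_Total, and echoing its value (1024). A minimal sketch of that lookup pattern, under assumed names (get_meminfo_sketch is illustrative, not the SPDK helper itself):

# Sketch of the meminfo lookup pattern visible in the trace; names are illustrative.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node queries read the node's own meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # drop the "Node N " prefix of per-node files
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1024 for HugePages_Total
            return 0
        fi
    done < "$mem_f"
    echo 0                     # key not present
}
# example: surp=$(get_meminfo_sketch HugePages_Surp); resv=$(get_meminfo_sketch HugePages_Rsvd)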
09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 54559984 kB' 'MemUsed: 11099040 kB' 'SwapCached: 0 kB' 'Active: 4517880 kB' 'Inactive: 3261984 kB' 'Active(anon): 4095048 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7465912 kB' 'Mapped: 92912 kB' 'AnonPages: 316964 kB' 'Shmem: 3781096 kB' 'KernelStack: 15336 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99320 kB' 'Slab: 430484 kB' 'SReclaimable: 99320 kB' 'SUnreclaim: 331164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:19.994 09:23:13 
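With surp=0, resv=0 and HugePages_Total=1024 confirmed globally, the get_nodes step enumerates /sys/devices/system/node/node* (two nodes here), and the printf above shows node0 reporting all 1024 hugepages free. A rough sketch of that per-node accounting, reusing the get_meminfo_sketch helper above (check_node_hugepages_sketch is an assumed name, not the actual test code):

# Sketch of the per-node check implied by the trace; names are illustrative.
check_node_hugepages_sketch() {
    local expected=$1                      # e.g. 1024
    local node idx pages total_seen=0
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node ]] || continue
        idx=${node##*node}
        pages=$(get_meminfo_sketch HugePages_Total "$idx")
        echo "node$idx: HugePages_Total=$pages"
        total_seen=$((total_seen + pages))
    done
    (( total_seen == expected ))           # pass when the node counts add up
}
# example: check_node_hugepages_sketch 1024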
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:19.994 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.257 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:20.258 node0=1024 expecting 1024 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:20.258 00:10:20.258 real 0m4.060s 00:10:20.258 user 0m1.563s 00:10:20.258 sys 0m2.516s 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:20.258 09:23:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:10:20.258 ************************************ 00:10:20.258 END TEST default_setup 00:10:20.258 ************************************ 00:10:20.258 09:23:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:10:20.258 09:23:13 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:20.258 09:23:13 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:20.258 09:23:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:20.258 ************************************ 00:10:20.258 START TEST per_node_1G_alloc 00:10:20.258 ************************************ 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
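The span above is the field-by-field scan that setup/common.sh's get_meminfo performs: because a node id was given it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, reads the file into an array with mapfile, strips the leading "Node N " token from every line, then read-loops with IFS=': ' past every other field until it reaches HugePages_Surp and echoes 0 for node 0. A minimal standalone sketch of that scan, written for illustration rather than copied from the repository, looks like this:

shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob, as in the trace

get_meminfo_sketch() {
  # get_meminfo_sketch <field> [node]: print one value from /proc/meminfo, or
  # from the per-node meminfo file when a node id is supplied and present.
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
  local var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue    # skip every other field, exactly as the trace does
    echo "$val"
    return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

# e.g. get_meminfo_sketch HugePages_Surp 0   -> prints "0" for the node traced above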
00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:20.258 09:23:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:23.567 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:10:23.567 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:23.567 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.832 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111250716 kB' 'MemAvailable: 114489412 kB' 'Buffers: 11428 kB' 'Cached: 8822940 kB' 'SwapCached: 0 kB' 'Active: 6179052 kB' 'Inactive: 3403736 kB' 'Active(anon): 5633932 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751200 kB' 'Mapped: 144200 kB' 'Shmem: 4885512 kB' 'KReclaimable: 228340 kB' 'Slab: 756868 kB' 'SReclaimable: 228340 kB' 'SUnreclaim: 528528 kB' 'KernelStack: 27008 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8406140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231492 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 
kB' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 
09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.833 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
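From this point the trace is the verification pass for the per_node_1G_alloc case: 1048576 kB was requested as 512 default-size (2048 kB) pages on each of nodes 0 and 1 (NRHUGE=512, HUGENODE=0,1), and because transparent hugepages are not set to [never] the script first reads AnonHugePages (anon=0), then re-reads HugePages_Surp and HugePages_Rsvd to confirm the global total of 1024 pages is exactly the requested pages with no surplus or reserved ones. A rough self-contained check along the same lines (the helper names below are illustrative, not the repository's functions):

meminfo_val() {
  # Print one "Key: value" field from /proc/meminfo, or from a per-node meminfo
  # file passed as the second argument; tolerate the "Node N " prefix those carry.
  local key=$1 file=${2:-/proc/meminfo}
  awk -v k="$key:" '{ sub(/^Node [0-9]+ /, "") } $1 == k { print $2; exit }' "$file"
}

check_per_node_hugepages() {
  # check_per_node_hugepages <size_kb> <node>...: expected pages per node is
  # size_kb divided by the default hugepage size (1048576 / 2048 = 512 here).
  local size_kb=$1; shift
  local nodes=("$@")
  local hp_size expect total surp resv node got
  hp_size=$(meminfo_val Hugepagesize)
  expect=$((size_kb / hp_size))
  total=$(meminfo_val HugePages_Total)
  surp=$(meminfo_val HugePages_Surp)
  resv=$(meminfo_val HugePages_Rsvd)
  # same invariant the trace asserts: total == requested pages + surplus + reserved
  (( total == expect * ${#nodes[@]} + surp + resv )) || return 1
  for node in "${nodes[@]}"; do
    got=$(meminfo_val HugePages_Total "/sys/devices/system/node/node$node/meminfo")
    echo "node${node}=${got} expecting ${expect}"
    (( got == expect )) || return 1
  done
}

# e.g. check_per_node_hugepages 1048576 0 1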
00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.834 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111251096 kB' 'MemAvailable: 114489792 kB' 'Buffers: 11428 kB' 'Cached: 8822944 kB' 'SwapCached: 0 kB' 'Active: 6178200 kB' 'Inactive: 3403736 kB' 'Active(anon): 5633080 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 750844 kB' 'Mapped: 144108 kB' 'Shmem: 4885516 kB' 'KReclaimable: 228340 kB' 'Slab: 756904 kB' 'SReclaimable: 228340 kB' 'SUnreclaim: 528564 kB' 'KernelStack: 26944 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231492 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.835 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.836 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:23.837 09:23:17 
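At this point the trace has finished walking every /proc/meminfo key looking for HugePages_Surp, echoed 0 and stored it as surp=0. The long run of records above is just a field-by-field lookup. Below is a minimal stand-alone re-creation of that pattern for readers following the log; the function name lookup_meminfo is invented here for illustration and the real setup/common.sh may differ in detail.

shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

# lookup_meminfo KEY [NODE]
# Prints the value of KEY from /proc/meminfo, or from the per-node file
# /sys/devices/system/node/nodeNODE/meminfo when NODE is given; prints 0
# if the key is absent. This mirrors the IFS=': ' read/compare loop in the trace.
lookup_meminfo() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        file=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }            # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
    done < "$file"
    echo 0
}

lookup_meminfo HugePages_Surp       # prints 0 on the system in this log
lookup_meminfo HugePages_Total 0    # per-node query; prints 512 on node 0 here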
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111252140 kB' 'MemAvailable: 114490836 kB' 'Buffers: 11428 kB' 'Cached: 8822944 kB' 'SwapCached: 0 kB' 'Active: 6178436 kB' 'Inactive: 3403736 kB' 'Active(anon): 5633316 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751080 kB' 'Mapped: 144108 kB' 'Shmem: 4885516 kB' 'KReclaimable: 228340 kB' 'Slab: 756904 kB' 'SReclaimable: 228340 kB' 'SUnreclaim: 528564 kB' 'KernelStack: 27056 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8406184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231476 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.837 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.104 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.105 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:24.106 nr_hugepages=1024 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:24.106 resv_hugepages=0 00:10:24.106 09:23:17 
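The records that follow (hugepages.sh@104 through @110) report surplus_hugepages and anon_hugepages and then cross-check the kernel-reported totals against the requested page count. A minimal re-creation of that accounting, reusing the hypothetical lookup_meminfo helper sketched earlier; the variable names mirror the trace, but the real hugepages.sh may structure this differently.

nr_hugepages=1024                            # page count requested by this test
surp=$(lookup_meminfo HugePages_Surp)        # 0 in this run
resv=$(lookup_meminfo HugePages_Rsvd)        # 0 in this run
total=$(lookup_meminfo HugePages_Total)      # 1024 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The pool is considered healthy when the kernel-reported total accounts for
# the requested pages plus any surplus and reserved pages (all zero here),
# matching the (( 1024 == nr_hugepages + surp + resv )) checks in the trace.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2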
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:24.106 surplus_hugepages=0 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:24.106 anon_hugepages=0 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111253024 kB' 'MemAvailable: 114491720 kB' 'Buffers: 11428 kB' 'Cached: 8822984 kB' 'SwapCached: 0 kB' 'Active: 6178488 kB' 'Inactive: 3403736 kB' 'Active(anon): 5633368 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 751104 kB' 'Mapped: 144108 kB' 'Shmem: 4885556 kB' 'KReclaimable: 228340 kB' 'Slab: 756904 kB' 'SReclaimable: 228340 kB' 'SUnreclaim: 528564 kB' 'KernelStack: 26944 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8406204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231508 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.106 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.107 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:24.108 09:23:17 
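The get_nodes step that starts here (hugepages.sh@29 through @33, continuing in the records below) walks the NUMA node directories in sysfs and records how many huge pages each node is expected to hold; on this two-node machine the 1024 requested 2048 kB pages split as 512 per node. A stand-alone sketch of that enumeration follows; the array name nodes_sys and the per-node figure 512 are taken from the trace, the rest is illustrative.

shopt -s extglob nullglob    # +([0-9]) globbing; loop is simply empty if no node dirs exist
declare -a nodes_sys=()

# One entry per /sys/devices/system/node/nodeN directory, indexed by N.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512    # expected share of the 1024-page pool per node
done

no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"                      # 2 in this log
echo "per-node expectation: ${nodes_sys[*]}"   # "512 512" here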
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55601940 kB' 'MemUsed: 10057084 kB' 'SwapCached: 0 kB' 'Active: 4514720 kB' 'Inactive: 3261984 kB' 'Active(anon): 4091888 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7465928 kB' 'Mapped: 91900 kB' 'AnonPages: 313896 kB' 'Shmem: 3781112 kB' 'KernelStack: 15144 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99320 kB' 'Slab: 430412 kB' 'SReclaimable: 99320 kB' 'SUnreclaim: 331092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.108 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 55649724 kB' 'MemUsed: 5030104 kB' 'SwapCached: 0 kB' 'Active: 1664216 kB' 'Inactive: 141752 kB' 'Active(anon): 1541928 kB' 'Inactive(anon): 0 kB' 'Active(file): 122288 kB' 'Inactive(file): 141752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1368524 kB' 'Mapped: 52208 kB' 'AnonPages: 437572 kB' 'Shmem: 1104484 kB' 'KernelStack: 11816 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129020 kB' 'Slab: 326492 kB' 'SReclaimable: 129020 kB' 'SUnreclaim: 197472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
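The block above is the trace of get_meminfo pulling HugePages_Surp out of /sys/devices/system/node/node1/meminfo: the per-node file is preferred over /proc/meminfo when it exists, the "Node <N> " prefix is stripped from every line, and each "key: value" pair is compared against the requested field until it matches, at which point the value (0 here) is echoed. A minimal sketch of that lookup, written as a simplified stand-in rather than the actual setup/common.sh code (the function name and return convention below are assumptions):

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      # Prefer the per-node statistics file when a node was requested and it exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      shopt -s extglob          # needed for the +([0-9]) pattern below
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <N> "; strip it so the keys
      # look the same as in /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"       # e.g. 0 for HugePages_Surp in the run above
              return 0
          fi
      done
      return 1
  }

Called as get_meminfo_sketch HugePages_Surp 1, this would print 0 for the node1 meminfo dump shown above.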
00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.109 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.110 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:24.111 node0=512 expecting 512 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:10:24.111 node1=512 expecting 512 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:24.111 00:10:24.111 real 0m3.876s 00:10:24.111 user 0m1.560s 00:10:24.111 sys 0m2.373s 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.111 09:23:17 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:24.111 ************************************ 00:10:24.111 END TEST per_node_1G_alloc 00:10:24.111 ************************************ 00:10:24.111 09:23:17 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:10:24.111 09:23:17 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:24.111 09:23:17 
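Before the even_2G_alloc test starts below, the per_node_1G_alloc verification above finishes by echoing "node0=512 expecting 512" and "node1=512 expecting 512" and checking [[ 512 == 512 ]]. The sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1 lines in the trace use each node's observed count as the key of an associative array, so identical counts collapse onto a single key. One plausible reading of that pattern, sketched with the values from this run (the final "even allocation confirmed" message and the key-count check are assumptions, not the script's own output):

  declare -A sorted_t=()
  declare -a nodes_test=([0]=512 [1]=512)   # per-node counts observed above
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1       # equal counts land on the same key
      echo "node$node=${nodes_test[node]} expecting 512"
  done
  # If every node received the same number of pages, exactly one key remains.
  (( ${#sorted_t[@]} == 1 )) && echo "even allocation confirmed"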
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.111 09:23:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:24.111 ************************************ 00:10:24.111 START TEST even_2G_alloc 00:10:24.111 ************************************ 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:24.111 09:23:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:27.412 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:10:27.412 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:10:27.412 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:27.412 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111247996 kB' 'MemAvailable: 114486676 kB' 'Buffers: 11428 kB' 'Cached: 8823120 kB' 'SwapCached: 0 kB' 'Active: 6188112 kB' 'Inactive: 3403736 kB' 'Active(anon): 5642992 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 760168 kB' 'Mapped: 144940 kB' 'Shmem: 4885692 kB' 'KReclaimable: 228308 kB' 'Slab: 757328 kB' 'SReclaimable: 228308 kB' 'SUnreclaim: 529020 kB' 'KernelStack: 26880 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8409304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231364 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.994 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.994 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
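The anon=0 assignment that closes the block above comes from verify_nr_hugepages: the trace first tests the transparent-hugepage setting ("always [madvise] never") against a pattern matching a [never] selection, and only because THP is not pinned to [never] does it go on to read AnonHugePages from /proc/meminfo, which is 0 on this host. A hedged sketch of that check, using the standard sysfs path rather than the script's own helpers (variable names and the echo line are illustrative):

  # Read the current THP mode, e.g. "always [madvise] never".
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      # THP can hand out anonymous huge pages, so count how many kB are in use.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  else
      anon=0
  fi
  echo "AnonHugePages: ${anon:-0} kB"       # 0 kB in the run above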
00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.995 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111249380 kB' 'MemAvailable: 114488056 kB' 'Buffers: 11428 kB' 'Cached: 8823128 kB' 'SwapCached: 0 kB' 'Active: 6182048 kB' 'Inactive: 3403736 kB' 'Active(anon): 5636928 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754564 kB' 'Mapped: 144140 kB' 'Shmem: 4885700 kB' 'KReclaimable: 228300 kB' 'Slab: 757352 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529052 kB' 'KernelStack: 26864 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404140 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231348 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.996 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111254796 kB' 'MemAvailable: 114493472 kB' 'Buffers: 11428 kB' 'Cached: 8823144 kB' 'SwapCached: 0 kB' 'Active: 6182112 kB' 'Inactive: 3403736 kB' 'Active(anon): 5636992 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754564 kB' 'Mapped: 144140 kB' 'Shmem: 4885716 kB' 'KReclaimable: 228300 kB' 'Slab: 757352 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529052 kB' 'KernelStack: 26864 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231348 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.997 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.998 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.999 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:28.000 nr_hugepages=1024 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:28.000 resv_hugepages=0 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:28.000 surplus_hugepages=0 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:28.000 anon_hugepages=0 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
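At this point the trace has collected all three counters (anon=0, surp=0, resv=0), and setup/hugepages.sh lines 102-109 print nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then assert that the requested total matches the sum before re-reading HugePages_Total for the even-2G-allocation check. A compact sketch of that consistency check, with variable names assumed from the echoed labels rather than taken from the SPDK source:

# Assumed names; mirrors the (( 1024 == nr_hugepages + surp + resv )) and
# (( 1024 == nr_hugepages )) tests traced just above.
nr_hugepages=1024; surp=0; resv=0; anon=0
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
    echo "even_2G_alloc: all 1024 x 2048 kB pages accounted for (no surplus/reserved)"
fi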
00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111254608 kB' 'MemAvailable: 114493284 kB' 'Buffers: 11428 kB' 'Cached: 8823168 kB' 'SwapCached: 0 kB' 'Active: 6182092 kB' 'Inactive: 3403736 kB' 'Active(anon): 5636972 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 754596 kB' 'Mapped: 144140 kB' 'Shmem: 4885740 kB' 'KReclaimable: 228300 kB' 'Slab: 757352 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529052 kB' 'KernelStack: 26864 kB' 'PageTables: 7436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8403988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231348 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.000 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.001 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.001 
09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the IFS=': ' read/continue loop steps past each remaining /proc/meminfo field (SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) without a match until it reaches HugePages_Total]
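The scan summarised above is the body of get_meminfo in setup/common.sh: it loads the requested meminfo file, strips the per-node "Node N" prefix, and reads field/value pairs until the requested field matches. The following is a rough reconstruction from the xtrace records (common.sh@17-33), not the verbatim script:

    # Rough reconstruction of setup/common.sh:get_meminfo as traced above.
    # get_meminfo FIELD [NODE] prints FIELD's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeNODE/meminfo when a node index is given.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue       # the long runs of 'continue' in the trace
            echo "$val"
            return 0
        done
        return 1
    }

Here it is called for HugePages_Total against /proc/meminfo (returning 1024 just below), then for HugePages_Surp against each per-node meminfo file (returning 0 for node0 and node1), and later, in odd_alloc's verify pass, for AnonHugePages.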
09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.003 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55615432 kB' 'MemUsed: 10043592 kB' 'SwapCached: 0 kB' 'Active: 4516620 kB' 'Inactive: 3261984 kB' 'Active(anon): 4093788 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7466048 kB' 'Mapped: 91936 kB' 'AnonPages: 315828 kB' 'Shmem: 3781232 kB' 'KernelStack: 15032 kB' 'PageTables: 
4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99312 kB' 'Slab: 430596 kB' 'SReclaimable: 99312 kB' 'SUnreclaim: 331284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:28.003
[setup/common.sh@31-32: the IFS=': ' read/continue loop steps past every node0 meminfo field printed above (MemTotal through Unaccepted) without a match before reaching HugePages_Surp]
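Around this scan, the traced hugepages.sh@112-117 steps fold reserved and per-node surplus pages into the expected per-node counts. A minimal sketch of that accounting, with the values observed in this run filled in as literals (the real script derives them from get_nodes and get_meminfo):

    # Per-node accounting as traced at hugepages.sh@112-117, with observed values.
    nodes_test=( [0]=512 [1]=512 )    # expectation: 1024 pages split evenly over 2 nodes
    nodes_sys=(  [0]=512 [1]=512 )    # filled by get_nodes from the per-node sysfs counts
    resv=0                            # reserved pages; the @110 check 1024 == nr_hugepages + surp + resv passed above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # spread reserved pages onto the expectation
        (( nodes_test[node] += 0 ))      # += HugePages_Surp for this node (0 on both nodes, per get_meminfo)
    done

With no surplus or reserved pages, the expectation stays at 512 per node.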
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.005 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 55637608 kB' 'MemUsed: 5042220 kB' 'SwapCached: 0 kB' 'Active: 1665068 kB' 'Inactive: 141752 kB' 'Active(anon): 1542780 kB' 'Inactive(anon): 0 kB' 'Active(file): 122288 kB' 'Inactive(file): 141752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1368604 kB' 'Mapped: 52204 kB' 'AnonPages: 438288 kB' 'Shmem: 1104564 kB' 'KernelStack: 11800 kB' 'PageTables: 
2912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128988 kB' 'Slab: 326756 kB' 'SReclaimable: 128988 kB' 'SUnreclaim: 197768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:28.005
[setup/common.sh@31-32: the same read/continue scan walks node1's meminfo fields printed above (MemTotal through Unaccepted) without a match before reaching HugePages_Surp]
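The verification that follows (hugepages.sh@126-130) buckets the expected and observed per-node counts by value and checks that both sides reduce to the same set, which is what produces the 'node0=512 expecting 512' / 'node1=512 expecting 512' lines and the final [[ 512 == 512 ]] below. A rough reconstruction, with the array layout assumed where the trace does not show it:

    # Approximate shape of the final check traced at hugepages.sh@126-130.
    nodes_test=( [0]=512 [1]=512 )                 # expected pages per node (accounting above)
    nodes_sys=(  [0]=512 [1]=512 )                 # pages per node reported by the kernel
    sorted_t=() ; sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1               # bucket expected counts by value
        sorted_s[nodes_sys[node]]=1                # bucket observed counts by value
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]     # both collapse to "512", so the test passes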
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:28.007 node0=512 expecting 512 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:10:28.007 node1=512 expecting 512 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:28.007 00:10:28.007 real 0m3.904s 00:10:28.007 user 0m1.571s 00:10:28.007 sys 0m2.391s 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:28.007 09:23:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:28.007 ************************************ 00:10:28.007 END TEST even_2G_alloc 00:10:28.007 ************************************ 00:10:28.007 09:23:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:10:28.007 09:23:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:28.007 09:23:21 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.007 09:23:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:28.268 ************************************ 00:10:28.268 START TEST odd_alloc 00:10:28.268 
************************************ 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:28.268 09:23:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:31.572 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:10:31.572 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:31.572 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.838 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111224492 kB' 'MemAvailable: 114463168 kB' 'Buffers: 11428 kB' 'Cached: 8819796 kB' 'SwapCached: 0 kB' 'Active: 6182996 kB' 'Inactive: 3403736 kB' 'Active(anon): 5637876 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758508 kB' 'Mapped: 144220 kB' 'Shmem: 4882368 kB' 'KReclaimable: 228300 kB' 'Slab: 757852 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529552 kB' 'KernelStack: 26848 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8401416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231220 kB' 
'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:31.839
[setup/common.sh@31-32: the read/continue scan walks /proc/meminfo looking for AnonHugePages, stepping past the fields from MemTotal through Committed_AS; the remaining fields follow below]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:31.839 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 
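
The per-key loop traced above boils down to a small /proc/meminfo lookup: set IFS=': ', read one "key value unit" triple per line, skip lines whose key does not match the requested one, and echo the matching value. A minimal sketch, assuming a simplified stand-in for the traced setup/common.sh helper (get_meminfo_sketch is a hypothetical name, not the script's own function):

  # Hedged sketch, reconstructed from the xtrace above; not the verbatim SPDK helper.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do          # same IFS/read pattern as the trace
          [[ $var == "$get" ]] || continue          # skip every other meminfo key
          echo "$val"                               # value only, unit lands in "_"
          return 0
      done < /proc/meminfo
      return 1
  }
  anon=$(get_meminfo_sketch AnonHugePages)          # the trace above resolves this to 0
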
09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111222888 kB' 'MemAvailable: 114461564 kB' 'Buffers: 11428 kB' 'Cached: 8819800 kB' 'SwapCached: 0 kB' 'Active: 6182844 kB' 'Inactive: 3403736 kB' 'Active(anon): 5637724 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758868 kB' 'Mapped: 144144 kB' 'Shmem: 4882372 kB' 'KReclaimable: 228300 kB' 'Slab: 757808 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529508 kB' 'KernelStack: 26864 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8401432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231220 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.840 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:31.841 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111224100 kB' 'MemAvailable: 114462776 kB' 'Buffers: 11428 kB' 'Cached: 8819816 kB' 'SwapCached: 0 kB' 'Active: 6182896 kB' 'Inactive: 3403736 kB' 'Active(anon): 5637776 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 758872 kB' 'Mapped: 144144 kB' 'Shmem: 4882388 kB' 'KReclaimable: 228300 kB' 'Slab: 757808 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529508 kB' 'KernelStack: 26864 kB' 'PageTables: 7432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8401452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231204 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- 
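
The same trace also shows how the helper chooses its input: node= is empty here, so it reads /proc/meminfo, while a per-NUMA-node query would read /sys/devices/system/node/node<N>/meminfo and strip the leading "Node <n> " prefix via the mem=("${mem[@]#Node +([0-9]) }") expansion. A rough sketch of that selection, assuming read_meminfo_sketch is a simplified stand-in rather than the real function:

  # Hedged sketch of the source selection seen in the trace; simplified, not verbatim.
  shopt -s extglob
  read_meminfo_sketch() {
      local node=${1:-} mem_f=/proc/meminfo
      local -a mem
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strips "Node <n> "; a no-op for /proc/meminfo
      printf '%s\n' "${mem[@]}"
  }
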
setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 
09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.842 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:10:31.843 nr_hugepages=1025 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:31.843 resv_hugepages=0 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:31.843 surplus_hugepages=0 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:31.843 anon_hugepages=0 00:10:31.843 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- 
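
At this point the odd_alloc bookkeeping is complete: the helper returned anon=0, surp=0 and resv=0, the test echoes nr_hugepages=1025, and it asserts 1025 == nr_hugepages + surp + resv before re-reading HugePages_Total. Condensed into a runnable sketch (values taken from the trace; the variable names mirror setup/hugepages.sh, but this is not the verbatim test):

  # Hedged sketch of the accounting check traced above, with the values from this run.
  nr_hugepages=1025                 # odd page count requested by the test
  anon=0 surp=0 resv=0              # AnonHugePages, HugePages_Surp, HugePages_Rsvd
  (( 1025 == nr_hugepages + surp + resv )) || echo 'unexpected surplus/reserved pages'
  (( 1025 == nr_hugepages ))               || echo 'kernel allocated a different count'
  # Cross-check against the dump: 1025 pages * 2048 kB/page = 2099200 kB (the Hugetlb line)
  echo "$(( nr_hugepages * 2048 )) kB"

The meminfo dumps above are self-consistent with that check: 1025 huge pages of 2048 kB each account exactly for the reported Hugetlb: 2099200 kB, with HugePages_Free still at 1025.
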
setup/common.sh@20 -- # local mem_f mem 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111226120 kB' 'MemAvailable: 114464796 kB' 'Buffers: 11428 kB' 'Cached: 8819836 kB' 'SwapCached: 0 kB' 'Active: 6183208 kB' 'Inactive: 3403736 kB' 'Active(anon): 5638088 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 759184 kB' 'Mapped: 144144 kB' 'Shmem: 4882408 kB' 'KReclaimable: 228300 kB' 'Slab: 757788 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 529488 kB' 'KernelStack: 26880 kB' 'PageTables: 7488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8404320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231204 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:31.844 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.109 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:32.110 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55585032 kB' 'MemUsed: 10073992 kB' 'SwapCached: 0 kB' 'Active: 4517120 kB' 'Inactive: 3261984 kB' 'Active(anon): 4094288 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7466148 kB' 'Mapped: 91940 kB' 'AnonPages: 316288 kB' 'Shmem: 3781332 kB' 'KernelStack: 15016 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99312 kB' 'Slab: 430976 kB' 'SReclaimable: 99312 kB' 'SUnreclaim: 331664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- 
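The node-0 read dumped just above differs from the system-wide case only in its source file: when a node number is given and /sys/devices/system/node/node0/meminfo exists, that file is used and its "Node 0 " line prefixes are stripped before the same key scan runs. A standalone sketch of that variant, under the assumption that the extglob pattern in the trace ("Node +([0-9]) ") is doing exactly that prefix strip.

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern, as in the traced script

# Sketch of the per-node variant: pick the per-node meminfo file when a node is
# requested, drop the "Node N " prefix from each line, then reuse the same key
# scan as before. Helper name and structure are illustrative only.
get_node_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix every line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. get_node_meminfo_sketch HugePages_Surp 0   -> 0 on the node-0 dump above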
setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.111 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 55642112 kB' 'MemUsed: 5037716 kB' 'SwapCached: 0 kB' 'Active: 1666324 kB' 'Inactive: 141752 kB' 'Active(anon): 1544036 kB' 'Inactive(anon): 0 kB' 'Active(file): 122288 kB' 'Inactive(file): 141752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1365140 kB' 'Mapped: 52208 kB' 'AnonPages: 443104 kB' 'Shmem: 1101100 kB' 'KernelStack: 11832 kB' 'PageTables: 3100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128988 kB' 'Slab: 326812 kB' 'SReclaimable: 128988 kB' 'SUnreclaim: 197824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.112 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:10:32.113 node0=512 expecting 513 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:10:32.113 node1=513 expecting 512 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:10:32.113 00:10:32.113 real 0m3.905s 00:10:32.113 user 0m1.630s 00:10:32.113 sys 0m2.330s 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:32.113 09:23:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:32.113 ************************************ 00:10:32.113 END TEST odd_alloc 00:10:32.113 ************************************ 00:10:32.113 09:23:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:10:32.113 09:23:25 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:32.113 09:23:25 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:32.113 09:23:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:32.114 ************************************ 00:10:32.114 START TEST custom_alloc 00:10:32.114 ************************************ 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
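From the numbers in the custom_alloc setup just traced (and the Hugepagesize: 2048 kB reported in the earlier dump), the size-to-page-count step works out to the requested size in kB divided by the default hugepage size: the 1048576 kB request yields the 512 pages seen here, and the 2097152 kB request that follows yields 1024. A small sketch of that arithmetic; the function name and the kB assumption are inferred from the traced values, not taken from the script itself.

#!/usr/bin/env bash
# Sketch of the size -> hugepage-count conversion implied by the traced values:
# 1048576 / 2048 = 512 and 2097152 / 2048 = 1024. Units assumed to be kB.
default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on the traced host

pages_for_size() {
    local size=$1                                  # requested size in kB
    if (( size < default_hugepages )); then        # mirrors the size >= default check in the trace
        echo "requested size smaller than one hugepage" >&2
        return 1
    fi
    echo $(( size / default_hugepages ))
}

pages_for_size 1048576    # -> 512
pages_for_size 2097152    # -> 1024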
00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:32.114 09:23:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:35.417 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:10:35.417 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:10:35.417 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:35.417 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110182120 kB' 'MemAvailable: 113420796 kB' 'Buffers: 11428 kB' 'Cached: 8819972 kB' 'SwapCached: 0 kB' 'Active: 6188852 kB' 'Inactive: 3403736 kB' 'Active(anon): 5643732 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 764516 kB' 'Mapped: 144236 kB' 'Shmem: 4882544 kB' 'KReclaimable: 228300 kB' 'Slab: 758484 kB' 'SReclaimable: 228300 kB' 'SUnreclaim: 530184 kB' 'KernelStack: 26848 kB' 'PageTables: 7064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8403548 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231380 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.996 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:35.997 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
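The long field-by-field match that follows each printf above is get_meminfo scanning /proc/meminfo (or a per-node meminfo file) for a single key and echoing 0 when it is absent. A simplified, hypothetical re-implementation of that lookup, not the code in setup/common.sh:

    # Hypothetical simplified get_meminfo: read the chosen meminfo file, split each
    # line on ': ', and print the value for the requested field (0 if not found).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node "$node" }        # per-node files prefix lines with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                    # kB for most fields, a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

On the system traced above, get_meminfo_sketch HugePages_Total would print 1536 and get_meminfo_sketch AnonHugePages would print 0, matching the anon=0 result recorded in the log.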
00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110182512 kB' 'MemAvailable: 113421184 kB' 'Buffers: 11428 kB' 'Cached: 8819976 kB' 'SwapCached: 0 kB' 'Active: 6189596 kB' 'Inactive: 3403736 kB' 'Active(anon): 5644476 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 765332 kB' 'Mapped: 144236 kB' 'Shmem: 4882548 kB' 'KReclaimable: 228292 kB' 'Slab: 758028 kB' 'SReclaimable: 228292 kB' 'SUnreclaim: 529736 kB' 'KernelStack: 27024 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8405052 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231508 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 
09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.998 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
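Taken together, the lookups in this verify_nr_hugepages pass boil down to a simple check: the two HUGENODE entries (512 + 1024) should appear as 1536 hugepages system-wide, with no anonymous THP, surplus, or reserved pages getting in the way. A hypothetical condensation of that check, reusing get_meminfo_sketch from the previous example (the messages are illustrative, not the script's output):

    # Hypothetical summary of the verification traced here.
    expected=$(( 512 + 1024 ))                       # nodes_hp[0] + nodes_hp[1]
    total=$(get_meminfo_sketch HugePages_Total)
    anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    rsvd=$(get_meminfo_sketch HugePages_Rsvd)
    if (( total == expected && anon == 0 && surp == 0 && rsvd == 0 )); then
        echo "custom_alloc: $total hugepages allocated as requested"
    else
        echo "custom_alloc: expected $expected, got total=$total anon=$anon surp=$surp rsvd=$rsvd" >&2
    fi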
00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:35.999 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110181932 kB' 'MemAvailable: 113420604 kB' 'Buffers: 11428 kB' 'Cached: 8819992 kB' 'SwapCached: 0 kB' 'Active: 6190012 kB' 'Inactive: 3403736 kB' 'Active(anon): 5644892 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 765732 kB' 'Mapped: 144160 kB' 'Shmem: 
4882564 kB' 'KReclaimable: 228292 kB' 'Slab: 757876 kB' 'SReclaimable: 228292 kB' 'SUnreclaim: 529584 kB' 'KernelStack: 27056 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8403588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231460 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 
09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.000 09:23:29 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
[setup/common.sh@31-32 trace: the IFS=': ' / read -r var val _ loop then skips Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages and ShmemHugePages; none of them match HugePages_Rsvd, so every iteration takes the continue branch]
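What the setup/common.sh@31-33 trace above amounts to is a plain field lookup over a meminfo file: split each line on ':' and spaces, skip every field that is not the one requested, and echo the value of the first match. A minimal sketch of such a lookup, assuming a helper named meminfo_get (the name is illustrative, not the SPDK helper itself):

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) pattern used below

# meminfo_get FIELD [NODE]
#   Print FIELD's value from /proc/meminfo, or from the per-node
#   /sys/devices/system/node/nodeN/meminfo file when NODE is given.
meminfo_get() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip that so the
    # field name is always the first token, as in /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching fields are skipped, as in the trace
        echo "$val"                        # value only; a trailing "kB" lands in $_
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

meminfo_get HugePages_Rsvd      # prints 0 in this run
meinfo_usage_note=''            # placeholder variable; see per-node example below
meminfo_get HugePages_Surp 0    # prints 0 for node0 in this run

With that shape in mind, the long runs of continue in the trace are the non-matching fields, and the echo 0 / return 0 that follows below is the HugePages_Rsvd hit.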
00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:10:36.001 nr_hugepages=1536 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:36.001 resv_hugepages=0 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:36.001 surplus_hugepages=0 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:36.001 anon_hugepages=0 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.001 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110181760 kB' 'MemAvailable: 113420432 kB' 'Buffers: 11428 kB' 'Cached: 8820016 kB' 'SwapCached: 0 kB' 'Active: 6189556 kB' 'Inactive: 3403736 kB' 'Active(anon): 5644436 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 765184 kB' 'Mapped: 144160 kB' 'Shmem: 4882588 kB' 'KReclaimable: 228292 kB' 'Slab: 757876 kB' 'SReclaimable: 228292 kB' 'SUnreclaim: 529584 kB' 'KernelStack: 26976 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8403608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231476 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.002 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 trace: the same read loop then skips Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages and ShmemPmdMapped; none of them match HugePages_Total]
00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55580604 kB' 'MemUsed: 10078420 kB' 'SwapCached: 0 kB' 'Active: 4519472 kB' 'Inactive: 3261984 kB' 'Active(anon): 4096640 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7466252 kB' 'Mapped: 91952 kB' 'AnonPages: 318384 kB' 'Shmem: 3781436 kB' 'KernelStack: 15224 kB' 'PageTables: 5012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99304 kB' 'Slab: 430836 kB' 'SReclaimable: 99304 kB' 'SUnreclaim: 331532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.003 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _
[setup/common.sh@31-32 trace: the read loop over node0's meminfo skips SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim and AnonHugePages; none of them match HugePages_Surp]
00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
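The node-0 pass above reads /sys/devices/system/node/node0/meminfo, whose lines carry a leading "Node 0" token; the mem=("${mem[@]#Node +([0-9]) }") expansion seen in the trace strips that prefix so the field names line up with /proc/meminfo. A standalone sketch of just that normalization step, assuming node0 exists on the box (extglob is required for the +([0-9]) pattern):

#!/usr/bin/env bash
shopt -s extglob    # enables the +([0-9]) pattern used in the expansion below

# Raw per-node lines look like:  Node 0 HugePages_Surp:      0
mapfile -t mem < /sys/devices/system/node/node0/meminfo

# Same expansion as in the trace: drop the leading "Node <N> " from every element.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}" | grep '^HugePages'   # HugePages_Total/Free/Surp for node0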
00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.004 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
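At hugepages.sh@115-117 the trace is folding reserved and surplus pages into the per-node expectations before they are compared against the per-node counters. A small sketch of that accounting, using the numbers this run reports (512 pages on node0, 1024 on node1, resv=0); the awk lookup is an illustrative stand-in for the traced get_meminfo call, not the script's own helper:

#!/usr/bin/env bash
# Expected per-node split of the 1536-page custom allocation, as printed above.
nodes_test=([0]=512 [1]=1024)
resv=0                          # HugePages_Rsvd read from /proc/meminfo above

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node surplus straight from the node's own meminfo file;
    # both nodes report HugePages_Surp: 0 in this log.
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
               "/sys/devices/system/node/node${node}/meminfo")
    (( nodes_test[node] += surp ))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # expected: 512 and 1024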
00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679828 kB' 'MemFree: 54599872 kB' 'MemUsed: 6079956 kB' 'SwapCached: 0 kB' 'Active: 1670596 kB' 'Inactive: 141752 kB' 'Active(anon): 1548308 kB' 'Inactive(anon): 0 kB' 'Active(file): 122288 kB' 'Inactive(file): 141752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1365232 kB' 'Mapped: 52208 kB' 'AnonPages: 447300 kB' 'Shmem: 1101192 kB' 'KernelStack: 11864 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 128988 kB' 'Slab: 327028 kB' 'SReclaimable: 128988 kB' 'SUnreclaim: 198040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.005 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
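The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries around here is the xtrace of a lookup over /proc/meminfo: each line is split on ': ', every key other than the requested one (HugePages_Surp for this pass) is skipped, and the matching value is echoed back to the caller. A minimal Bash sketch of that pattern follows; the function name get_mem_field and the simplified per-node handling are illustrative assumptions, not the verbatim helper in SPDK's setup/common.sh.

#!/usr/bin/env bash
# Minimal sketch (assumed name) of the meminfo lookup traced above.
get_mem_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries run the same loop over the node's meminfo file.
    # (The real helper also strips the leading "Node N " prefix first, as the
    #  mapfile / "${mem[@]#Node +([0-9]) }" entries in the trace show; that
    #  step is omitted in this sketch.)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key yields one "[[ ... ]]" / "continue" pair in
        # the xtrace, which is why the log repeats for each meminfo field.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

get_mem_field HugePages_Surp   # prints the surplus hugepage count, e.g. 0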
00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:36.006 node0=512 expecting 512 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:10:36.006 node1=1024 expecting 1024 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:10:36.006 00:10:36.006 real 0m3.879s 00:10:36.006 user 0m1.625s 00:10:36.006 sys 0m2.312s 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:36.006 09:23:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:36.006 ************************************ 00:10:36.006 END TEST custom_alloc 00:10:36.006 ************************************ 00:10:36.006 09:23:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:10:36.006 09:23:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:36.006 09:23:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:36.006 09:23:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:36.006 ************************************ 00:10:36.006 START TEST no_shrink_alloc 00:10:36.006 ************************************ 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:36.006 09:23:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:39.310 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:39.310 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:10:39.310 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:39.571 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:39.571 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:39.571 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:39.571 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:39.571 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:39.571 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:39.838 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111224604 kB' 'MemAvailable: 114463272 kB' 'Buffers: 11428 kB' 'Cached: 8820220 kB' 'SwapCached: 0 kB' 'Active: 6191976 kB' 'Inactive: 3403736 kB' 'Active(anon): 5646856 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 766964 kB' 'Mapped: 144520 kB' 'Shmem: 4882792 kB' 'KReclaimable: 228284 kB' 'Slab: 757988 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529704 kB' 'KernelStack: 26864 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8403756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231332 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.839 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111224152 kB' 'MemAvailable: 114462820 kB' 'Buffers: 11428 kB' 'Cached: 8820400 kB' 'SwapCached: 0 kB' 'Active: 6192404 kB' 'Inactive: 3403736 kB' 'Active(anon): 5647284 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 767236 kB' 'Mapped: 144252 kB' 'Shmem: 4882972 kB' 'KReclaimable: 228284 kB' 'Slab: 757988 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529704 kB' 'KernelStack: 26864 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8403772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.840 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.840 09:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.841 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
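At this point verify_nr_hugepages has already derived anon=0 from AnonHugePages and is walking /proc/meminfo again for HugePages_Surp, with HugePages_Rsvd queried next; those counters feed the per-node "node0=512 expecting 512" / "node1=1024 expecting 1024" comparison printed for the previous test. Outside the harness the same counters can be pulled with one awk call per key, as sketched below; meminfo_val and the echoed summary are illustrative stand-ins, not the actual hugepages.sh check.

#!/usr/bin/env bash
# Illustrative counter collection (assumed helper name); gathers the same
# values the trace extracts with its read/continue loops, using awk instead.
meminfo_val() { awk -v key="$1:" '$1 == key {print $2; exit}' /proc/meminfo; }

anon=$(meminfo_val AnonHugePages)     # 0 (kB) in this run
surp=$(meminfo_val HugePages_Surp)    # 0
resv=$(meminfo_val HugePages_Rsvd)    # 0
total=$(meminfo_val HugePages_Total)  # 1024 here, i.e. the nr_hugepages under test
free=$(meminfo_val HugePages_Free)    # 1024

echo "total=$total free=$free surp=$surp resv=$resv anon=${anon}kB"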
00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111223144 kB' 'MemAvailable: 114461812 kB' 'Buffers: 11428 kB' 'Cached: 8820416 kB' 'SwapCached: 0 kB' 'Active: 6191948 kB' 'Inactive: 3403736 kB' 'Active(anon): 5646828 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 767204 kB' 'Mapped: 144176 kB' 'Shmem: 4882988 kB' 'KReclaimable: 228284 kB' 'Slab: 757976 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529692 kB' 'KernelStack: 26864 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8403796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.842 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 
09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.843 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- 
# return 0 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:39.844 nr_hugepages=1024 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:39.844 resv_hugepages=0 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:39.844 surplus_hugepages=0 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:39.844 anon_hugepages=0 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111223144 kB' 'MemAvailable: 114461812 kB' 'Buffers: 11428 kB' 'Cached: 8820456 kB' 'SwapCached: 0 kB' 'Active: 6191604 kB' 'Inactive: 3403736 kB' 'Active(anon): 5646484 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 766800 kB' 'Mapped: 144176 kB' 'Shmem: 4883028 kB' 'KReclaimable: 228284 kB' 'Slab: 757976 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529692 kB' 'KernelStack: 26848 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8403820 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231284 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 
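The trace around this point records setup/common.sh's get_meminfo helper: it selects /proc/meminfo (or a per-node meminfo file), strips any "Node N " prefix, and walks the keys one by one until it reaches the requested field — the long runs of `[[ ... ]]` / `continue` entries are that scan, and the `echo 0` / `return 0` entries are the match (HugePages_Rsvd, giving resv=0). A minimal sketch of that lookup, reconstructed from the xtrace lines rather than taken from the SPDK source, could look like this:

```bash
#!/usr/bin/env bash
shopt -s extglob

# Minimal sketch of the get_meminfo lookup whose xtrace appears above.
# The flow (mem_f selection, "Node N " prefix stripping, IFS=': ' read loop,
# echo of the matching value) is reconstructed from the trace; it is not the
# SPDK setup/common.sh source itself.
get_meminfo() {
    local get=$1          # key to look up, e.g. HugePages_Rsvd
    local node=$2         # optional NUMA node; empty means system-wide
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # With a node argument the per-node meminfo file is used instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" lines until the requested key matches, then
    # print its numeric value -- the repetitive [[ ... ]] / continue entries
    # in the trace are exactly this loop skipping non-matching keys.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# In the run logged here: get_meminfo HugePages_Rsvd  ->  0
#                         get_meminfo HugePages_Total ->  1024
```

The HugePages_Total dump printed just above feeds the same loop in the entries that follow, which is why the whole key list is scanned again for each field the test asks about.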
00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.844 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.845 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 54540128 kB' 'MemUsed: 11118896 kB' 'SwapCached: 0 kB' 'Active: 4518208 kB' 'Inactive: 3261984 kB' 'Active(anon): 4095376 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7466348 kB' 'Mapped: 91972 kB' 'AnonPages: 317088 kB' 'Shmem: 3781532 kB' 'KernelStack: 15016 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99296 kB' 'Slab: 431332 kB' 'SReclaimable: 99296 kB' 'SUnreclaim: 332036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
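At this point the trace switches to per-node accounting: hugepages.sh enumerates /sys/devices/system/node/node*, records how many hugepages each node holds (no_nodes=2 in this run, 1024 pages on node0 and 0 on node1), and then re-reads HugePages_Surp and HugePages_Total from each node's own meminfo file — the node0 dump above and the matching loop that follows are that per-node lookup. A hedged sketch of the flow, reusing the get_meminfo helper sketched earlier; the nodes_sys / no_nodes names mirror the xtrace, while the exact way pages are counted per node is an assumption for illustration only:

```bash
#!/usr/bin/env bash
shopt -s extglob nullglob

# Hedged sketch of the per-node bookkeeping suggested by the hugepages.sh
# trace above. Assumes the get_meminfo helper from the earlier sketch is
# already defined in this shell.
declare -a nodes_sys=()

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Index by node number; store the 2 MiB hugepages configured there
        # (1024 on node0 and 0 on node1 in the logged run).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}     # the traced run reports no_nodes=2
    (( no_nodes > 0 ))
}

get_nodes
for node in "${!nodes_sys[@]}"; do
    # With a node argument, get_meminfo reads
    # /sys/devices/system/node/node$node/meminfo instead of /proc/meminfo.
    surp=$(get_meminfo HugePages_Surp "$node")
    total=$(get_meminfo HugePages_Total "$node")
    echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
done
```

Reading the per-node meminfo files is what lets the no_shrink_alloc test confirm that the 1024 pages it requested all landed on node0 and that no surplus pages were allocated anywhere.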
00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.846 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:39.847 node0=1024 expecting 1024 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:10:39.847 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:39.848 09:23:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:10:43.154 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:43.154 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:10:43.415 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:10:43.415 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:10:43.683 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:43.683 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111229128 kB' 'MemAvailable: 114467796 kB' 'Buffers: 11428 kB' 'Cached: 8820548 kB' 'SwapCached: 0 kB' 'Active: 6198812 kB' 'Inactive: 3403736 kB' 'Active(anon): 5653692 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 773940 kB' 'Mapped: 144268 kB' 'Shmem: 4883120 kB' 'KReclaimable: 228284 kB' 'Slab: 757800 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529516 kB' 'KernelStack: 26864 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231316 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:10:43.683 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.684 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:43.685 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111229540 kB' 'MemAvailable: 114468208 kB' 'Buffers: 11428 kB' 'Cached: 8820552 kB' 'SwapCached: 0 kB' 'Active: 6198376 kB' 'Inactive: 3403736 kB' 'Active(anon): 5653256 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 773528 kB' 'Mapped: 144188 kB' 'Shmem: 4883124 kB' 'KReclaimable: 228284 kB' 'Slab: 757848 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529564 kB' 'KernelStack: 26864 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.685 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.686 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
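[editor's note] The xtrace above repeatedly walks setup/common.sh's get_meminfo helper: it mapfiles /proc/meminfo (or a per-node meminfo file), strips any "Node N " prefix, then reads each "Key: value" pair with IFS=': ', skipping non-matching keys with `continue` until the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) is found and its value is echoed. The following is a minimal, hedged sketch of that pattern reconstructed only from the trace; it is not the verbatim SPDK helper, and the function name and loop shape here are simplifications.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below

    # Sketch of the /proc/meminfo lookup pattern shown in the xtrace:
    # print the numeric value of one meminfo key, optionally per NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # Per-node queries switch to the node-specific meminfo when present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix that must be stripped first.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

For example, against the meminfo dump printed above, `get_meminfo_sketch HugePages_Free` would print 1024 and `get_meminfo_sketch HugePages_Surp` would print 0, which is exactly the `echo 0` / `return 0` sequence the trace records before hugepages.sh stores anon=0 and surp=0.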
00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111228784 kB' 'MemAvailable: 114467452 kB' 'Buffers: 11428 kB' 'Cached: 8820572 kB' 'SwapCached: 0 kB' 'Active: 6198392 kB' 'Inactive: 3403736 kB' 'Active(anon): 5653272 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 773524 kB' 'Mapped: 144188 kB' 'Shmem: 4883144 kB' 'KReclaimable: 228284 kB' 'Slab: 757848 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529564 kB' 'KernelStack: 26864 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.687 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.688 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:43.689 nr_hugepages=1024 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:43.689 resv_hugepages=0 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:43.689 surplus_hugepages=0 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:43.689 anon_hugepages=0 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:43.689 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 111228964 kB' 'MemAvailable: 114467632 kB' 'Buffers: 11428 kB' 'Cached: 8820600 kB' 'SwapCached: 0 kB' 'Active: 6198720 kB' 'Inactive: 3403736 kB' 'Active(anon): 5653600 kB' 'Inactive(anon): 0 kB' 'Active(file): 545120 kB' 'Inactive(file): 3403736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 773876 kB' 'Mapped: 144188 kB' 'Shmem: 4883172 kB' 'KReclaimable: 228284 kB' 'Slab: 757848 kB' 'SReclaimable: 228284 kB' 'SUnreclaim: 529564 kB' 'KernelStack: 26864 kB' 'PageTables: 7464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8404784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 231300 kB' 'VmallocChunk: 0 kB' 'Percpu: 96768 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 574852 kB' 'DirectMap2M: 10639360 kB' 'DirectMap1G: 124780544 kB' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:43.690 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:43.691 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659024 kB' 'MemFree: 54555016 kB' 'MemUsed: 11104008 kB' 'SwapCached: 0 kB' 'Active: 4517632 kB' 'Inactive: 3261984 kB' 'Active(anon): 4094800 kB' 'Inactive(anon): 0 kB' 'Active(file): 422832 kB' 'Inactive(file): 3261984 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7466488 kB' 'Mapped: 91984 kB' 'AnonPages: 316320 kB' 'Shmem: 3781672 kB' 'KernelStack: 15000 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 99296 kB' 'Slab: 431208 kB' 'SReclaimable: 99296 kB' 'SUnreclaim: 331912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 
09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.692 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:43.693 node0=1024 expecting 1024 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:43.693 00:10:43.693 real 0m7.689s 00:10:43.693 user 0m3.019s 00:10:43.693 sys 0m4.790s 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:43.693 09:23:37 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:43.693 ************************************ 00:10:43.693 END TEST no_shrink_alloc 00:10:43.693 ************************************ 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
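The long scan above is the meminfo-field lookup pattern used throughout setup/common.sh: split each line on ': ', skip every field that is not the one requested, and return its value. A minimal self-contained sketch of that pattern follows; get_field is an illustrative name, and the per-node meminfo sources the real helper also understands are not reproduced here.

    #!/usr/bin/env bash
    # Illustrative re-creation of the scan traced above (not the repo's real helper).
    get_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # non-matching lines hit "continue" in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_field HugePages_Surp    # prints 0 on the node traced above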
00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:43.956 09:23:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:43.956 00:10:43.956 real 0m28.001s 00:10:43.956 user 0m11.224s 00:10:43.956 sys 0m17.159s 00:10:43.956 09:23:37 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:43.956 09:23:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:43.956 ************************************ 00:10:43.956 END TEST hugepages 00:10:43.956 ************************************ 00:10:43.956 09:23:37 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:10:43.956 09:23:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:43.956 09:23:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:43.956 09:23:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:43.956 ************************************ 00:10:43.956 START TEST driver 00:10:43.956 ************************************ 00:10:43.956 09:23:37 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:10:43.956 * Looking for test storage... 
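Before the driver suite output continues, a rough equivalent of the clear_hp cleanup that just ran above: every huge-page pool under every NUMA node is reset to zero and CLEAR_HUGE is exported. The function name is illustrative and the loop needs root to write the sysfs files.

    clear_hugepages() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # release this pool's pages
            done
        done
        export CLEAR_HUGE=yes
    }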
00:10:43.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:10:43.956 09:23:37 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:10:43.956 09:23:37 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:43.956 09:23:37 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:10:49.249 09:23:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:10:49.249 09:23:42 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:49.249 09:23:42 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:49.249 09:23:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:10:49.249 ************************************ 00:10:49.249 START TEST guess_driver 00:10:49.250 ************************************ 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:10:49.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:10:49.250 Looking for driver=vfio-pci 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:10:49.250 09:23:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:10:52.557 09:23:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.557 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:10:52.819 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.080 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:10:53.080 09:23:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:10:53.080 09:23:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:53.080 09:23:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:10:58.375 00:10:58.375 real 0m9.075s 00:10:58.375 user 0m2.994s 00:10:58.375 sys 0m5.203s 00:10:58.375 09:23:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.375 09:23:51 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:10:58.375 ************************************ 00:10:58.375 END TEST guess_driver 00:10:58.375 ************************************ 00:10:58.375 00:10:58.375 real 0m14.326s 00:10:58.375 user 0m4.514s 00:10:58.375 sys 0m8.037s 00:10:58.375 09:23:51 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.375 
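Condensed, the guess_driver decision traced above amounts to: use vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and modprobe can resolve the module, otherwise report that no valid driver was found. The sketch below works under those assumptions; pick_vfio_driver is an illustrative name and the real script's full fallback logic is not reproduced.

    pick_vfio_driver() {
        local unsafe=N n_groups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if (( n_groups > 0 )) || [[ $unsafe == Y ]]; then
            # "modprobe --show-depends" prints insmod lines ending in .ko(.xz)
            # when the module and its dependencies are resolvable.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }

    pick_vfio_driver    # printed vfio-pci on the node traced above (314 IOMMU groups)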
09:23:51 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:10:58.375 ************************************ 00:10:58.375 END TEST driver 00:10:58.375 ************************************ 00:10:58.375 09:23:51 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:10:58.375 09:23:51 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:58.375 09:23:51 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:58.375 09:23:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:58.375 ************************************ 00:10:58.375 START TEST devices 00:10:58.375 ************************************ 00:10:58.375 09:23:51 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:10:58.375 * Looking for test storage... 00:10:58.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:10:58.375 09:23:51 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:10:58.375 09:23:51 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:10:58.375 09:23:51 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:58.375 09:23:51 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:02.587 09:23:55 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:11:02.587 09:23:55 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:11:02.587 No valid GPT data, 
bailing 00:11:02.587 09:23:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:02.587 09:23:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:02.587 09:23:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:02.587 09:23:55 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:11:02.587 09:23:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:11:02.588 09:23:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:02.588 09:23:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:11:02.588 09:23:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:11:02.588 09:23:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:02.588 09:23:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:02.588 09:23:55 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:02.588 09:23:55 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.588 09:23:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:02.588 ************************************ 00:11:02.588 START TEST nvme_mount 00:11:02.588 ************************************ 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:11:02.588 09:23:56 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:02.588 09:23:56 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:03.532 Creating new GPT entries in memory. 00:11:03.532 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:03.532 other utilities. 00:11:03.532 09:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:03.532 09:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:03.532 09:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:03.532 09:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:03.532 09:23:57 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:11:04.921 Creating new GPT entries in memory. 00:11:04.921 The operation has completed successfully. 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 111600 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
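The partition/format/mount sequence driving the verify loop above condenses to a handful of commands. Paths and the sector range are taken from the trace; this is destructive and assumes a scratch /dev/nvme0n1, so treat it as an illustration rather than the script's exact flow.

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # drop any existing GPT/MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB partition #1
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"                                # dummy file the verify step checks for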
00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:04.921 09:23:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:08.230 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:08.492 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:08.492 09:24:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:08.754 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:11:08.754 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:11:08.754 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:08.754 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:11:08.754 09:24:02 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:08.754 09:24:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.059 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:12.060 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:12.633 09:24:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.939 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:15.940 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:16.201 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:16.201 00:11:16.201 real 0m13.694s 00:11:16.201 user 0m4.241s 00:11:16.201 sys 0m7.305s 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:16.201 09:24:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:11:16.201 ************************************ 00:11:16.201 END TEST nvme_mount 00:11:16.201 ************************************ 00:11:16.201 
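The cleanup_nvme step that closes the nvme_mount test is, in essence, an unmount plus a signature wipe so the following dm_mount test starts from a blank disk. Condensed (destructive; paths from the trace):

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    disk=/dev/nvme0n1

    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 superblock on the partition, if present
    [[ -b $disk ]] && wipefs --all "$disk"           # GPT headers and protective MBR on the disk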
09:24:09 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:16.201 09:24:09 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:16.201 09:24:09 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:16.201 09:24:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:16.463 ************************************ 00:11:16.463 START TEST dm_mount 00:11:16.463 ************************************ 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:16.463 09:24:09 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:17.407 Creating new GPT entries in memory. 00:11:17.407 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:17.407 other utilities. 00:11:17.407 09:24:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:17.407 09:24:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:17.407 09:24:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:17.407 09:24:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:17.407 09:24:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:11:18.364 Creating new GPT entries in memory. 00:11:18.364 The operation has completed successfully. 
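The same flock + sgdisk pattern is repeated for a second partition immediately below; taken together, the two calls dm_mount issues look like this (sector ranges from the trace, 1 GiB each; flock holds an advisory lock on the device node while sgdisk rewrites the partition table).

    disk=/dev/nvme0n1
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # partition 1 (just created above)
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # partition 2 (created next)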
00:11:18.364 09:24:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:18.364 09:24:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:18.364 09:24:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:18.364 09:24:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:18.364 09:24:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:11:19.312 The operation has completed successfully. 00:11:19.312 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:19.312 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:19.312 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 117355 00:11:19.574 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:19.575 09:24:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.878 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:22.879 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:23.140 
09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:23.140 09:24:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.447 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.448 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.448 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.448 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.448 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:11:26.448 09:24:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:27.022 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:11:27.022 00:11:27.022 real 0m10.605s 00:11:27.022 user 0m2.773s 00:11:27.022 sys 0m4.884s 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.022 09:24:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:11:27.022 ************************************ 00:11:27.022 END TEST dm_mount 00:11:27.022 ************************************ 00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
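Aside: the dm_mount flow traced above (sgdisk to carve two partitions, dmsetup create, mkfs.ext4, mount, then dmsetup remove and wipefs) can be reproduced by hand roughly as sketched below. This is a hedged sketch, not the devices.sh implementation: the linear dmsetup table and the /mnt/dm_mount mount point are assumptions, since the log never prints the table the helper feeds to dmsetup.
    p1=$(blockdev --getsz /dev/nvme0n1p1)    # partition sizes in 512-byte sectors
    p2=$(blockdev --getsz /dev/nvme0n1p2)
    {   # concatenate the two partitions into one linear device-mapper target (assumed table)
      echo "0 $p1 linear /dev/nvme0n1p1 0"
      echo "$p1 $p2 linear /dev/nvme0n1p2 0"
    } | dmsetup create nvme_dm_test
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0, as in the log
    ls /sys/class/block/nvme0n1p1/holders/                     # should list $dm, the check devices.sh@168 makes
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /mnt/dm_mount                                     # assumed mount point for this sketch
    mount /dev/mapper/nvme_dm_test /mnt/dm_mount
    # teardown, mirroring cleanup_dm above
    umount /mnt/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2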
00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:27.022 09:24:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:27.283 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:11:27.283 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:11:27.283 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:27.283 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:27.283 09:24:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:27.283 00:11:27.283 real 0m28.932s 00:11:27.283 user 0m8.678s 00:11:27.283 sys 0m15.033s 00:11:27.283 09:24:20 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.283 09:24:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:27.283 ************************************ 00:11:27.283 END TEST devices 00:11:27.283 ************************************ 00:11:27.283 00:11:27.283 real 1m38.204s 00:11:27.283 user 0m33.381s 00:11:27.283 sys 0m55.877s 00:11:27.283 09:24:20 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.283 09:24:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:27.283 ************************************ 00:11:27.283 END TEST setup.sh 00:11:27.283 ************************************ 00:11:27.283 09:24:20 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:11:30.592 Hugepages 00:11:30.592 node hugesize free / total 00:11:30.592 node0 1048576kB 0 / 0 00:11:30.592 node0 2048kB 2048 / 2048 00:11:30.592 node1 1048576kB 0 / 0 00:11:30.592 node1 2048kB 0 / 0 00:11:30.592 00:11:30.592 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:30.855 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:11:30.855 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:11:30.855 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:11:30.855 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:11:30.855 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:11:30.856 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:11:30.856 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:11:30.856 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:11:30.856 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:11:30.856 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:11:30.856 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:11:30.856 09:24:24 -- spdk/autotest.sh@130 -- # uname -s 
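Aside: the per-node hugepage counts in the status table above ("node0 2048kB 2048 / 2048", "node1 2048kB 0 / 0") come straight from sysfs; a minimal way to read the same free/total numbers without setup.sh, assuming 2048kB pages:
    for node in /sys/devices/system/node/node[0-9]*; do
      hp=$node/hugepages/hugepages-2048kB
      echo "$(basename "$node"): $(cat "$hp/free_hugepages") free / $(cat "$hp/nr_hugepages") total 2048kB pages"
    done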
00:11:30.856 09:24:24 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:30.856 09:24:24 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:30.856 09:24:24 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:34.167 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:11:34.429 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:11:36.348 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:11:36.610 09:24:30 -- common/autotest_common.sh@1528 -- # sleep 1 00:11:37.554 09:24:31 -- common/autotest_common.sh@1529 -- # bdfs=() 00:11:37.554 09:24:31 -- common/autotest_common.sh@1529 -- # local bdfs 00:11:37.554 09:24:31 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:11:37.554 09:24:31 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:11:37.554 09:24:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:37.554 09:24:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:37.554 09:24:31 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:37.554 09:24:31 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:37.554 09:24:31 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:37.554 09:24:31 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:11:37.554 09:24:31 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:11:37.554 09:24:31 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:40.864 Waiting for block devices as requested 00:11:41.126 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:11:41.126 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:11:41.126 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:11:41.388 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:11:41.388 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:11:41.388 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:11:41.388 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:11:41.649 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:11:41.649 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:11:41.911 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:11:41.911 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:11:41.911 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:11:42.173 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:11:42.173 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:11:42.173 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:11:42.434 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:11:42.434 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:11:42.696 09:24:36 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:11:42.696 09:24:36 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:11:42.696 09:24:36 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:11:42.696 09:24:36 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:11:42.696 09:24:36 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:42.696 09:24:36 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:42.696 09:24:36 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:11:42.696 09:24:36 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:42.696 09:24:36 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:42.696 09:24:36 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:11:42.696 09:24:36 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:42.696 09:24:36 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:42.696 09:24:36 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:42.696 09:24:36 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:42.696 09:24:36 -- common/autotest_common.sh@1553 -- # continue 00:11:42.696 09:24:36 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:42.696 09:24:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.696 09:24:36 -- common/autotest_common.sh@10 -- # set +x 00:11:42.696 09:24:36 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:42.696 09:24:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:42.696 09:24:36 -- common/autotest_common.sh@10 -- # set +x 00:11:42.696 09:24:36 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:46.001 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:11:46.001 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:11:46.001 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:11:46.261 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:11:46.831 09:24:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:46.831 09:24:40 -- common/autotest_common.sh@726 -- # xtrace_disable 
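Aside: the oacs/unvmcap parsing traced above reads NVMe Identify Controller fields for /dev/nvme0. A hedged sketch of the same checks (bit 3 of OACS, value 0x8, is the Namespace Management capability the helper masks out; unvmcap is the unallocated NVM capacity, which is 0 on this drive, so there is nothing to revert):
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)         # " 0x5f" in this run
    if (( oacs & 0x8 )); then
      echo "/dev/nvme0 supports namespace management"
    fi
    unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)   # " 0" in this run
    (( unvmcap == 0 )) && echo "no unallocated capacity, nothing to revert"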
00:11:46.831 09:24:40 -- common/autotest_common.sh@10 -- # set +x 00:11:46.831 09:24:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:46.831 09:24:40 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:11:46.831 09:24:40 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:11:46.831 09:24:40 -- common/autotest_common.sh@1573 -- # bdfs=() 00:11:46.831 09:24:40 -- common/autotest_common.sh@1573 -- # local bdfs 00:11:46.831 09:24:40 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:11:46.831 09:24:40 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:46.831 09:24:40 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:46.831 09:24:40 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:46.831 09:24:40 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:46.831 09:24:40 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:46.831 09:24:40 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:11:46.831 09:24:40 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:11:46.831 09:24:40 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:46.831 09:24:40 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:11:46.831 09:24:40 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:11:46.831 09:24:40 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:11:46.831 09:24:40 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:11:46.831 09:24:40 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:11:46.831 09:24:40 -- common/autotest_common.sh@1589 -- # return 0 00:11:46.831 09:24:40 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:46.831 09:24:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:46.831 09:24:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:46.831 09:24:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:46.831 09:24:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:46.831 09:24:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:46.831 09:24:40 -- common/autotest_common.sh@10 -- # set +x 00:11:46.831 09:24:40 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:11:46.831 09:24:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:46.831 09:24:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.831 09:24:40 -- common/autotest_common.sh@10 -- # set +x 00:11:46.831 ************************************ 00:11:46.832 START TEST env 00:11:46.832 ************************************ 00:11:46.832 09:24:40 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:11:47.092 * Looking for test storage... 
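Aside: get_nvme_bdfs_by_id 0x0a54 above boils down to listing NVMe BDFs via gen_nvme.sh and keeping those whose PCI device id matches; a hedged sketch of that filter follows. The Samsung controller here reports 0xa80a, so the list ends up empty and opal_revert_cleanup is a no-op, as the "[[ 0xa80a == \0\x\0\a\5\4 ]]" line shows.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      [[ $(cat /sys/bus/pci/devices/$bdf/device) == 0x0a54 ]] && echo "$bdf"
    done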
00:11:47.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:11:47.092 09:24:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:11:47.092 09:24:40 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.092 09:24:40 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.092 09:24:40 env -- common/autotest_common.sh@10 -- # set +x 00:11:47.092 ************************************ 00:11:47.092 START TEST env_memory 00:11:47.092 ************************************ 00:11:47.092 09:24:40 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:11:47.092 00:11:47.092 00:11:47.092 CUnit - A unit testing framework for C - Version 2.1-3 00:11:47.092 http://cunit.sourceforge.net/ 00:11:47.092 00:11:47.092 00:11:47.092 Suite: memory 00:11:47.092 Test: alloc and free memory map ...[2024-05-16 09:24:40.517513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:47.092 passed 00:11:47.092 Test: mem map translation ...[2024-05-16 09:24:40.543171] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:47.092 [2024-05-16 09:24:40.543199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:47.092 [2024-05-16 09:24:40.543246] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:47.092 [2024-05-16 09:24:40.543255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:47.092 passed 00:11:47.092 Test: mem map registration ...[2024-05-16 09:24:40.598540] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:47.092 [2024-05-16 09:24:40.598558] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:47.092 passed 00:11:47.354 Test: mem map adjacent registrations ...passed 00:11:47.354 00:11:47.354 Run Summary: Type Total Ran Passed Failed Inactive 00:11:47.354 suites 1 1 n/a 0 0 00:11:47.354 tests 4 4 4 0 0 00:11:47.354 asserts 152 152 152 0 n/a 00:11:47.354 00:11:47.354 Elapsed time = 0.192 seconds 00:11:47.354 00:11:47.354 real 0m0.207s 00:11:47.354 user 0m0.198s 00:11:47.354 sys 0m0.008s 00:11:47.354 09:24:40 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:47.354 09:24:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:47.354 ************************************ 00:11:47.354 END TEST env_memory 00:11:47.354 ************************************ 00:11:47.354 09:24:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:11:47.354 09:24:40 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.354 09:24:40 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:11:47.354 09:24:40 env -- common/autotest_common.sh@10 -- # set +x 00:11:47.354 ************************************ 00:11:47.354 START TEST env_vtophys 00:11:47.354 ************************************ 00:11:47.354 09:24:40 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:11:47.354 EAL: lib.eal log level changed from notice to debug 00:11:47.354 EAL: Detected lcore 0 as core 0 on socket 0 00:11:47.354 EAL: Detected lcore 1 as core 1 on socket 0 00:11:47.354 EAL: Detected lcore 2 as core 2 on socket 0 00:11:47.354 EAL: Detected lcore 3 as core 3 on socket 0 00:11:47.354 EAL: Detected lcore 4 as core 4 on socket 0 00:11:47.354 EAL: Detected lcore 5 as core 5 on socket 0 00:11:47.354 EAL: Detected lcore 6 as core 6 on socket 0 00:11:47.354 EAL: Detected lcore 7 as core 7 on socket 0 00:11:47.354 EAL: Detected lcore 8 as core 8 on socket 0 00:11:47.354 EAL: Detected lcore 9 as core 9 on socket 0 00:11:47.355 EAL: Detected lcore 10 as core 10 on socket 0 00:11:47.355 EAL: Detected lcore 11 as core 11 on socket 0 00:11:47.355 EAL: Detected lcore 12 as core 12 on socket 0 00:11:47.355 EAL: Detected lcore 13 as core 13 on socket 0 00:11:47.355 EAL: Detected lcore 14 as core 14 on socket 0 00:11:47.355 EAL: Detected lcore 15 as core 15 on socket 0 00:11:47.355 EAL: Detected lcore 16 as core 16 on socket 0 00:11:47.355 EAL: Detected lcore 17 as core 17 on socket 0 00:11:47.355 EAL: Detected lcore 18 as core 18 on socket 0 00:11:47.355 EAL: Detected lcore 19 as core 19 on socket 0 00:11:47.355 EAL: Detected lcore 20 as core 20 on socket 0 00:11:47.355 EAL: Detected lcore 21 as core 21 on socket 0 00:11:47.355 EAL: Detected lcore 22 as core 22 on socket 0 00:11:47.355 EAL: Detected lcore 23 as core 23 on socket 0 00:11:47.355 EAL: Detected lcore 24 as core 24 on socket 0 00:11:47.355 EAL: Detected lcore 25 as core 25 on socket 0 00:11:47.355 EAL: Detected lcore 26 as core 26 on socket 0 00:11:47.355 EAL: Detected lcore 27 as core 27 on socket 0 00:11:47.355 EAL: Detected lcore 28 as core 28 on socket 0 00:11:47.355 EAL: Detected lcore 29 as core 29 on socket 0 00:11:47.355 EAL: Detected lcore 30 as core 30 on socket 0 00:11:47.355 EAL: Detected lcore 31 as core 31 on socket 0 00:11:47.355 EAL: Detected lcore 32 as core 32 on socket 0 00:11:47.355 EAL: Detected lcore 33 as core 33 on socket 0 00:11:47.355 EAL: Detected lcore 34 as core 34 on socket 0 00:11:47.355 EAL: Detected lcore 35 as core 35 on socket 0 00:11:47.355 EAL: Detected lcore 36 as core 0 on socket 1 00:11:47.355 EAL: Detected lcore 37 as core 1 on socket 1 00:11:47.355 EAL: Detected lcore 38 as core 2 on socket 1 00:11:47.355 EAL: Detected lcore 39 as core 3 on socket 1 00:11:47.355 EAL: Detected lcore 40 as core 4 on socket 1 00:11:47.355 EAL: Detected lcore 41 as core 5 on socket 1 00:11:47.355 EAL: Detected lcore 42 as core 6 on socket 1 00:11:47.355 EAL: Detected lcore 43 as core 7 on socket 1 00:11:47.355 EAL: Detected lcore 44 as core 8 on socket 1 00:11:47.355 EAL: Detected lcore 45 as core 9 on socket 1 00:11:47.355 EAL: Detected lcore 46 as core 10 on socket 1 00:11:47.355 EAL: Detected lcore 47 as core 11 on socket 1 00:11:47.355 EAL: Detected lcore 48 as core 12 on socket 1 00:11:47.355 EAL: Detected lcore 49 as core 13 on socket 1 00:11:47.355 EAL: Detected lcore 50 as core 14 on socket 1 00:11:47.355 EAL: Detected lcore 51 as core 15 on socket 1 00:11:47.355 EAL: Detected lcore 52 as core 16 on socket 1 00:11:47.355 EAL: Detected lcore 
53 as core 17 on socket 1 00:11:47.355 EAL: Detected lcore 54 as core 18 on socket 1 00:11:47.355 EAL: Detected lcore 55 as core 19 on socket 1 00:11:47.355 EAL: Detected lcore 56 as core 20 on socket 1 00:11:47.355 EAL: Detected lcore 57 as core 21 on socket 1 00:11:47.355 EAL: Detected lcore 58 as core 22 on socket 1 00:11:47.355 EAL: Detected lcore 59 as core 23 on socket 1 00:11:47.355 EAL: Detected lcore 60 as core 24 on socket 1 00:11:47.355 EAL: Detected lcore 61 as core 25 on socket 1 00:11:47.355 EAL: Detected lcore 62 as core 26 on socket 1 00:11:47.355 EAL: Detected lcore 63 as core 27 on socket 1 00:11:47.355 EAL: Detected lcore 64 as core 28 on socket 1 00:11:47.355 EAL: Detected lcore 65 as core 29 on socket 1 00:11:47.355 EAL: Detected lcore 66 as core 30 on socket 1 00:11:47.355 EAL: Detected lcore 67 as core 31 on socket 1 00:11:47.355 EAL: Detected lcore 68 as core 32 on socket 1 00:11:47.355 EAL: Detected lcore 69 as core 33 on socket 1 00:11:47.355 EAL: Detected lcore 70 as core 34 on socket 1 00:11:47.355 EAL: Detected lcore 71 as core 35 on socket 1 00:11:47.355 EAL: Detected lcore 72 as core 0 on socket 0 00:11:47.355 EAL: Detected lcore 73 as core 1 on socket 0 00:11:47.355 EAL: Detected lcore 74 as core 2 on socket 0 00:11:47.355 EAL: Detected lcore 75 as core 3 on socket 0 00:11:47.355 EAL: Detected lcore 76 as core 4 on socket 0 00:11:47.355 EAL: Detected lcore 77 as core 5 on socket 0 00:11:47.355 EAL: Detected lcore 78 as core 6 on socket 0 00:11:47.355 EAL: Detected lcore 79 as core 7 on socket 0 00:11:47.355 EAL: Detected lcore 80 as core 8 on socket 0 00:11:47.355 EAL: Detected lcore 81 as core 9 on socket 0 00:11:47.355 EAL: Detected lcore 82 as core 10 on socket 0 00:11:47.355 EAL: Detected lcore 83 as core 11 on socket 0 00:11:47.355 EAL: Detected lcore 84 as core 12 on socket 0 00:11:47.355 EAL: Detected lcore 85 as core 13 on socket 0 00:11:47.355 EAL: Detected lcore 86 as core 14 on socket 0 00:11:47.355 EAL: Detected lcore 87 as core 15 on socket 0 00:11:47.355 EAL: Detected lcore 88 as core 16 on socket 0 00:11:47.355 EAL: Detected lcore 89 as core 17 on socket 0 00:11:47.355 EAL: Detected lcore 90 as core 18 on socket 0 00:11:47.355 EAL: Detected lcore 91 as core 19 on socket 0 00:11:47.355 EAL: Detected lcore 92 as core 20 on socket 0 00:11:47.355 EAL: Detected lcore 93 as core 21 on socket 0 00:11:47.355 EAL: Detected lcore 94 as core 22 on socket 0 00:11:47.355 EAL: Detected lcore 95 as core 23 on socket 0 00:11:47.355 EAL: Detected lcore 96 as core 24 on socket 0 00:11:47.355 EAL: Detected lcore 97 as core 25 on socket 0 00:11:47.355 EAL: Detected lcore 98 as core 26 on socket 0 00:11:47.355 EAL: Detected lcore 99 as core 27 on socket 0 00:11:47.355 EAL: Detected lcore 100 as core 28 on socket 0 00:11:47.355 EAL: Detected lcore 101 as core 29 on socket 0 00:11:47.355 EAL: Detected lcore 102 as core 30 on socket 0 00:11:47.355 EAL: Detected lcore 103 as core 31 on socket 0 00:11:47.355 EAL: Detected lcore 104 as core 32 on socket 0 00:11:47.355 EAL: Detected lcore 105 as core 33 on socket 0 00:11:47.355 EAL: Detected lcore 106 as core 34 on socket 0 00:11:47.355 EAL: Detected lcore 107 as core 35 on socket 0 00:11:47.355 EAL: Detected lcore 108 as core 0 on socket 1 00:11:47.355 EAL: Detected lcore 109 as core 1 on socket 1 00:11:47.355 EAL: Detected lcore 110 as core 2 on socket 1 00:11:47.355 EAL: Detected lcore 111 as core 3 on socket 1 00:11:47.355 EAL: Detected lcore 112 as core 4 on socket 1 00:11:47.355 EAL: Detected lcore 113 as core 5 on 
socket 1 00:11:47.355 EAL: Detected lcore 114 as core 6 on socket 1 00:11:47.355 EAL: Detected lcore 115 as core 7 on socket 1 00:11:47.355 EAL: Detected lcore 116 as core 8 on socket 1 00:11:47.355 EAL: Detected lcore 117 as core 9 on socket 1 00:11:47.355 EAL: Detected lcore 118 as core 10 on socket 1 00:11:47.355 EAL: Detected lcore 119 as core 11 on socket 1 00:11:47.355 EAL: Detected lcore 120 as core 12 on socket 1 00:11:47.355 EAL: Detected lcore 121 as core 13 on socket 1 00:11:47.355 EAL: Detected lcore 122 as core 14 on socket 1 00:11:47.355 EAL: Detected lcore 123 as core 15 on socket 1 00:11:47.355 EAL: Detected lcore 124 as core 16 on socket 1 00:11:47.355 EAL: Detected lcore 125 as core 17 on socket 1 00:11:47.355 EAL: Detected lcore 126 as core 18 on socket 1 00:11:47.355 EAL: Detected lcore 127 as core 19 on socket 1 00:11:47.355 EAL: Skipped lcore 128 as core 20 on socket 1 00:11:47.355 EAL: Skipped lcore 129 as core 21 on socket 1 00:11:47.355 EAL: Skipped lcore 130 as core 22 on socket 1 00:11:47.355 EAL: Skipped lcore 131 as core 23 on socket 1 00:11:47.355 EAL: Skipped lcore 132 as core 24 on socket 1 00:11:47.355 EAL: Skipped lcore 133 as core 25 on socket 1 00:11:47.355 EAL: Skipped lcore 134 as core 26 on socket 1 00:11:47.355 EAL: Skipped lcore 135 as core 27 on socket 1 00:11:47.355 EAL: Skipped lcore 136 as core 28 on socket 1 00:11:47.355 EAL: Skipped lcore 137 as core 29 on socket 1 00:11:47.355 EAL: Skipped lcore 138 as core 30 on socket 1 00:11:47.355 EAL: Skipped lcore 139 as core 31 on socket 1 00:11:47.355 EAL: Skipped lcore 140 as core 32 on socket 1 00:11:47.355 EAL: Skipped lcore 141 as core 33 on socket 1 00:11:47.355 EAL: Skipped lcore 142 as core 34 on socket 1 00:11:47.355 EAL: Skipped lcore 143 as core 35 on socket 1 00:11:47.355 EAL: Maximum logical cores by configuration: 128 00:11:47.355 EAL: Detected CPU lcores: 128 00:11:47.355 EAL: Detected NUMA nodes: 2 00:11:47.355 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:11:47.355 EAL: Detected shared linkage of DPDK 00:11:47.355 EAL: No shared files mode enabled, IPC will be disabled 00:11:47.355 EAL: Bus pci wants IOVA as 'DC' 00:11:47.355 EAL: Buses did not request a specific IOVA mode. 00:11:47.355 EAL: IOMMU is available, selecting IOVA as VA mode. 00:11:47.355 EAL: Selected IOVA mode 'VA' 00:11:47.355 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.355 EAL: Probing VFIO support... 00:11:47.355 EAL: IOMMU type 1 (Type 1) is supported 00:11:47.355 EAL: IOMMU type 7 (sPAPR) is not supported 00:11:47.355 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:11:47.355 EAL: VFIO support initialized 00:11:47.355 EAL: Ask a virtual area of 0x2e000 bytes 00:11:47.355 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:47.355 EAL: Setting up physically contiguous memory... 
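Aside: the lcore/socket map and the "VFIO support initialized" line above can be sanity-checked without DPDK; a minimal sketch using lscpu and sysfs (this box detects lcores 0-143, i.e. 144 logical CPUs across 2 sockets, with EAL capped at 128):
    lscpu | grep -E '^CPU\(s\)|Socket\(s\)|NUMA node'   # logical CPU, socket and NUMA node counts
    ls /sys/kernel/iommu_groups | wc -l                 # non-zero means the IOMMU is on, so VFIO is usable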
00:11:47.355 EAL: Setting maximum number of open files to 524288 00:11:47.355 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:47.355 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:11:47.355 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:47.355 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.355 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:47.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:47.355 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.355 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:47.355 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:47.355 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.355 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:47.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:47.355 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.355 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:47.355 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:47.355 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.355 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:47.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:47.355 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.355 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:47.355 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:47.355 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.355 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:47.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:47.355 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.355 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:47.355 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:47.355 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:11:47.355 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.355 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:11:47.356 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:47.356 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.356 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:11:47.356 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:11:47.356 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.356 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:11:47.356 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:47.356 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.356 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:11:47.356 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:11:47.356 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.356 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:11:47.356 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:47.356 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.356 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:11:47.356 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:11:47.356 EAL: Ask a virtual area of 0x61000 bytes 00:11:47.356 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:11:47.356 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:11:47.356 EAL: Ask a virtual area of 0x400000000 bytes 00:11:47.356 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:11:47.356 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:11:47.356 EAL: Hugepages will be freed exactly as allocated. 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: TSC frequency is ~2400000 KHz 00:11:47.356 EAL: Main lcore 0 is ready (tid=7f701b9b9a00;cpuset=[0]) 00:11:47.356 EAL: Trying to obtain current memory policy. 00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 0 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 2MB 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:47.356 EAL: Mem event callback 'spdk:(nil)' registered 00:11:47.356 00:11:47.356 00:11:47.356 CUnit - A unit testing framework for C - Version 2.1-3 00:11:47.356 http://cunit.sourceforge.net/ 00:11:47.356 00:11:47.356 00:11:47.356 Suite: components_suite 00:11:47.356 Test: vtophys_malloc_test ...passed 00:11:47.356 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 4 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 4MB 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was shrunk by 4MB 00:11:47.356 EAL: Trying to obtain current memory policy. 00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 4 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 6MB 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was shrunk by 6MB 00:11:47.356 EAL: Trying to obtain current memory policy. 00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 4 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 10MB 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was shrunk by 10MB 00:11:47.356 EAL: Trying to obtain current memory policy. 
00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 4 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 18MB 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was shrunk by 18MB 00:11:47.356 EAL: Trying to obtain current memory policy. 00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 4 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 34MB 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was shrunk by 34MB 00:11:47.356 EAL: Trying to obtain current memory policy. 00:11:47.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.356 EAL: Restoring previous memory policy: 4 00:11:47.356 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.356 EAL: request: mp_malloc_sync 00:11:47.356 EAL: No shared files mode enabled, IPC is disabled 00:11:47.356 EAL: Heap on socket 0 was expanded by 66MB 00:11:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.617 EAL: request: mp_malloc_sync 00:11:47.617 EAL: No shared files mode enabled, IPC is disabled 00:11:47.617 EAL: Heap on socket 0 was shrunk by 66MB 00:11:47.617 EAL: Trying to obtain current memory policy. 00:11:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.617 EAL: Restoring previous memory policy: 4 00:11:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.617 EAL: request: mp_malloc_sync 00:11:47.617 EAL: No shared files mode enabled, IPC is disabled 00:11:47.617 EAL: Heap on socket 0 was expanded by 130MB 00:11:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.617 EAL: request: mp_malloc_sync 00:11:47.617 EAL: No shared files mode enabled, IPC is disabled 00:11:47.617 EAL: Heap on socket 0 was shrunk by 130MB 00:11:47.617 EAL: Trying to obtain current memory policy. 00:11:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.617 EAL: Restoring previous memory policy: 4 00:11:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.617 EAL: request: mp_malloc_sync 00:11:47.617 EAL: No shared files mode enabled, IPC is disabled 00:11:47.617 EAL: Heap on socket 0 was expanded by 258MB 00:11:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.617 EAL: request: mp_malloc_sync 00:11:47.617 EAL: No shared files mode enabled, IPC is disabled 00:11:47.617 EAL: Heap on socket 0 was shrunk by 258MB 00:11:47.617 EAL: Trying to obtain current memory policy. 
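Aside: the repeated "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy" pairs above bracket each hugepage allocation so the expanded heap lands on socket 0. Outside the test, the same preferred-node placement can be approximated with numactl, e.g. (hypothetical invocation, not something the harness runs):
    numactl --preferred=0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys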
00:11:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.617 EAL: Restoring previous memory policy: 4 00:11:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.617 EAL: request: mp_malloc_sync 00:11:47.617 EAL: No shared files mode enabled, IPC is disabled 00:11:47.617 EAL: Heap on socket 0 was expanded by 514MB 00:11:47.878 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.878 EAL: request: mp_malloc_sync 00:11:47.878 EAL: No shared files mode enabled, IPC is disabled 00:11:47.878 EAL: Heap on socket 0 was shrunk by 514MB 00:11:47.878 EAL: Trying to obtain current memory policy. 00:11:47.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:47.878 EAL: Restoring previous memory policy: 4 00:11:47.878 EAL: Calling mem event callback 'spdk:(nil)' 00:11:47.878 EAL: request: mp_malloc_sync 00:11:47.878 EAL: No shared files mode enabled, IPC is disabled 00:11:47.878 EAL: Heap on socket 0 was expanded by 1026MB 00:11:48.139 EAL: Calling mem event callback 'spdk:(nil)' 00:11:48.139 EAL: request: mp_malloc_sync 00:11:48.139 EAL: No shared files mode enabled, IPC is disabled 00:11:48.139 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:48.139 passed 00:11:48.139 00:11:48.139 Run Summary: Type Total Ran Passed Failed Inactive 00:11:48.139 suites 1 1 n/a 0 0 00:11:48.139 tests 2 2 2 0 0 00:11:48.139 asserts 497 497 497 0 n/a 00:11:48.139 00:11:48.139 Elapsed time = 0.688 seconds 00:11:48.139 EAL: Calling mem event callback 'spdk:(nil)' 00:11:48.139 EAL: request: mp_malloc_sync 00:11:48.139 EAL: No shared files mode enabled, IPC is disabled 00:11:48.139 EAL: Heap on socket 0 was shrunk by 2MB 00:11:48.139 EAL: No shared files mode enabled, IPC is disabled 00:11:48.139 EAL: No shared files mode enabled, IPC is disabled 00:11:48.139 EAL: No shared files mode enabled, IPC is disabled 00:11:48.139 00:11:48.139 real 0m0.826s 00:11:48.139 user 0m0.434s 00:11:48.139 sys 0m0.364s 00:11:48.139 09:24:41 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.139 09:24:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 ************************************ 00:11:48.139 END TEST env_vtophys 00:11:48.139 ************************************ 00:11:48.139 09:24:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:11:48.139 09:24:41 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:48.139 09:24:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.139 09:24:41 env -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 ************************************ 00:11:48.139 START TEST env_pci 00:11:48.139 ************************************ 00:11:48.139 09:24:41 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:11:48.139 00:11:48.139 00:11:48.139 CUnit - A unit testing framework for C - Version 2.1-3 00:11:48.139 http://cunit.sourceforge.net/ 00:11:48.139 00:11:48.139 00:11:48.139 Suite: pci 00:11:48.139 Test: pci_hook ...[2024-05-16 09:24:41.687463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 128418 has claimed it 00:11:48.401 EAL: Cannot find device (10000:00:01.0) 00:11:48.401 EAL: Failed to attach device on primary process 00:11:48.401 passed 00:11:48.401 00:11:48.401 Run Summary: Type Total Ran Passed Failed Inactive 
00:11:48.401 suites 1 1 n/a 0 0 00:11:48.401 tests 1 1 1 0 0 00:11:48.401 asserts 25 25 25 0 n/a 00:11:48.401 00:11:48.401 Elapsed time = 0.030 seconds 00:11:48.401 00:11:48.401 real 0m0.051s 00:11:48.401 user 0m0.015s 00:11:48.401 sys 0m0.036s 00:11:48.401 09:24:41 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.401 09:24:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:48.401 ************************************ 00:11:48.401 END TEST env_pci 00:11:48.401 ************************************ 00:11:48.401 09:24:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:48.401 09:24:41 env -- env/env.sh@15 -- # uname 00:11:48.401 09:24:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:48.401 09:24:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:48.401 09:24:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:48.401 09:24:41 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:48.401 09:24:41 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.401 09:24:41 env -- common/autotest_common.sh@10 -- # set +x 00:11:48.401 ************************************ 00:11:48.401 START TEST env_dpdk_post_init 00:11:48.401 ************************************ 00:11:48.401 09:24:41 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:48.401 EAL: Detected CPU lcores: 128 00:11:48.401 EAL: Detected NUMA nodes: 2 00:11:48.401 EAL: Detected shared linkage of DPDK 00:11:48.401 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:48.401 EAL: Selected IOVA mode 'VA' 00:11:48.401 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.401 EAL: VFIO support initialized 00:11:48.401 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:48.401 EAL: Using IOMMU type 1 (Type 1) 00:11:48.663 EAL: Ignore mapping IO port bar(1) 00:11:48.663 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:11:48.924 EAL: Ignore mapping IO port bar(1) 00:11:48.924 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:11:49.185 EAL: Ignore mapping IO port bar(1) 00:11:49.185 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:11:49.185 EAL: Ignore mapping IO port bar(1) 00:11:49.445 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:11:49.445 EAL: Ignore mapping IO port bar(1) 00:11:49.707 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:11:49.707 EAL: Ignore mapping IO port bar(1) 00:11:49.967 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:11:49.967 EAL: Ignore mapping IO port bar(1) 00:11:49.967 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:11:50.229 EAL: Ignore mapping IO port bar(1) 00:11:50.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:11:50.489 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:11:50.750 EAL: Ignore mapping IO port bar(1) 00:11:50.751 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:11:50.751 EAL: Ignore mapping IO port bar(1) 00:11:51.012 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
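Aside: whether EAL probes a BDF with spdk_ioat/spdk_nvme (as above) or skips it depends on which kernel driver setup.sh left the device bound to; that binding is visible in sysfs, e.g. for one I/OAT channel and the NVMe drive used here:
    for bdf in 0000:00:01.0 0000:65:00.0; do
      echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"   # vfio-pci at this point in the run
    done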
00:11:51.012 EAL: Ignore mapping IO port bar(1) 00:11:51.273 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:11:51.273 EAL: Ignore mapping IO port bar(1) 00:11:51.534 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:11:51.534 EAL: Ignore mapping IO port bar(1) 00:11:51.534 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:11:51.795 EAL: Ignore mapping IO port bar(1) 00:11:51.795 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:11:52.056 EAL: Ignore mapping IO port bar(1) 00:11:52.056 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:11:52.317 EAL: Ignore mapping IO port bar(1) 00:11:52.317 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:11:52.317 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:11:52.317 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:11:52.317 Starting DPDK initialization... 00:11:52.317 Starting SPDK post initialization... 00:11:52.317 SPDK NVMe probe 00:11:52.317 Attaching to 0000:65:00.0 00:11:52.317 Attached to 0000:65:00.0 00:11:52.317 Cleaning up... 00:11:54.236 00:11:54.236 real 0m5.734s 00:11:54.236 user 0m0.183s 00:11:54.236 sys 0m0.104s 00:11:54.236 09:24:47 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.236 09:24:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 ************************************ 00:11:54.236 END TEST env_dpdk_post_init 00:11:54.236 ************************************ 00:11:54.236 09:24:47 env -- env/env.sh@26 -- # uname 00:11:54.236 09:24:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:54.236 09:24:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:11:54.236 09:24:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:54.236 09:24:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.236 09:24:47 env -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 ************************************ 00:11:54.236 START TEST env_mem_callbacks 00:11:54.236 ************************************ 00:11:54.236 09:24:47 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:11:54.236 EAL: Detected CPU lcores: 128 00:11:54.236 EAL: Detected NUMA nodes: 2 00:11:54.236 EAL: Detected shared linkage of DPDK 00:11:54.236 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:54.236 EAL: Selected IOVA mode 'VA' 00:11:54.236 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.236 EAL: VFIO support initialized 00:11:54.236 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:54.236 00:11:54.236 00:11:54.236 CUnit - A unit testing framework for C - Version 2.1-3 00:11:54.236 http://cunit.sourceforge.net/ 00:11:54.236 00:11:54.236 00:11:54.236 Suite: memory 00:11:54.236 Test: test ... 
00:11:54.236 register 0x200000200000 2097152 00:11:54.236 malloc 3145728 00:11:54.236 register 0x200000400000 4194304 00:11:54.236 buf 0x200000500000 len 3145728 PASSED 00:11:54.236 malloc 64 00:11:54.236 buf 0x2000004fff40 len 64 PASSED 00:11:54.236 malloc 4194304 00:11:54.236 register 0x200000800000 6291456 00:11:54.236 buf 0x200000a00000 len 4194304 PASSED 00:11:54.236 free 0x200000500000 3145728 00:11:54.236 free 0x2000004fff40 64 00:11:54.236 unregister 0x200000400000 4194304 PASSED 00:11:54.236 free 0x200000a00000 4194304 00:11:54.236 unregister 0x200000800000 6291456 PASSED 00:11:54.236 malloc 8388608 00:11:54.236 register 0x200000400000 10485760 00:11:54.236 buf 0x200000600000 len 8388608 PASSED 00:11:54.236 free 0x200000600000 8388608 00:11:54.236 unregister 0x200000400000 10485760 PASSED 00:11:54.236 passed 00:11:54.236 00:11:54.236 Run Summary: Type Total Ran Passed Failed Inactive 00:11:54.236 suites 1 1 n/a 0 0 00:11:54.236 tests 1 1 1 0 0 00:11:54.236 asserts 15 15 15 0 n/a 00:11:54.236 00:11:54.236 Elapsed time = 0.010 seconds 00:11:54.236 00:11:54.236 real 0m0.068s 00:11:54.236 user 0m0.021s 00:11:54.236 sys 0m0.047s 00:11:54.236 09:24:47 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.236 09:24:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 ************************************ 00:11:54.236 END TEST env_mem_callbacks 00:11:54.236 ************************************ 00:11:54.236 00:11:54.236 real 0m7.422s 00:11:54.236 user 0m1.059s 00:11:54.236 sys 0m0.902s 00:11:54.236 09:24:47 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.236 09:24:47 env -- common/autotest_common.sh@10 -- # set +x 00:11:54.236 ************************************ 00:11:54.236 END TEST env 00:11:54.236 ************************************ 00:11:54.236 09:24:47 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:11:54.236 09:24:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:54.236 09:24:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.236 09:24:47 -- common/autotest_common.sh@10 -- # set +x 00:11:54.498 ************************************ 00:11:54.498 START TEST rpc 00:11:54.498 ************************************ 00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:11:54.498 * Looking for test storage... 00:11:54.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:11:54.498 09:24:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=129858 00:11:54.498 09:24:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:54.498 09:24:47 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:11:54.498 09:24:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 129858 00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@827 -- # '[' -z 129858 ']' 00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
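The rpc suite that starts here launches spdk_tgt with tracing enabled for the bdev group (-e bdev) and then blocks in the harness's waitforlisten helper until the target answers on /var/tmp/spdk.sock. A rough, hand-runnable equivalent of that start-and-wait step is sketched below; the retry loop and the use of scripts/rpc.py are assumptions standing in for the harness helpers, not the test's exact code.

# Sketch only: start spdk_tgt the way the rpc tests above do and poll until its
# RPC socket answers. SPDK_DIR and the polling loop are stand-ins for the
# harness's rootdir/waitforlisten helpers.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
for _ in $(seq 1 100); do
    # spdk_get_version succeeds once the target is listening on /var/tmp/spdk.sock
    if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "spdk_tgt (pid $spdk_pid) is ready for RPC"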
00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:54.498 09:24:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.498 [2024-05-16 09:24:48.015988] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:11:54.498 [2024-05-16 09:24:48.016065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129858 ] 00:11:54.498 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.760 [2024-05-16 09:24:48.096010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.760 [2024-05-16 09:24:48.195072] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:54.760 [2024-05-16 09:24:48.195132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 129858' to capture a snapshot of events at runtime. 00:11:54.760 [2024-05-16 09:24:48.195141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.760 [2024-05-16 09:24:48.195154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.760 [2024-05-16 09:24:48.195160] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid129858 for offline analysis/debug. 00:11:54.760 [2024-05-16 09:24:48.195188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.333 09:24:48 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.333 09:24:48 rpc -- common/autotest_common.sh@860 -- # return 0 00:11:55.333 09:24:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:11:55.333 09:24:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:11:55.333 09:24:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:55.333 09:24:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:55.333 09:24:48 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.333 09:24:48 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.333 09:24:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.333 ************************************ 00:11:55.333 START TEST rpc_integrity 00:11:55.333 ************************************ 00:11:55.333 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:55.333 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.333 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.333 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.333 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.333 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:55.333 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:55.333 09:24:48 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:55.333 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:55.333 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.333 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:55.596 { 00:11:55.596 "name": "Malloc0", 00:11:55.596 "aliases": [ 00:11:55.596 "39b72338-56a6-4f4b-81ce-5ca1043ce503" 00:11:55.596 ], 00:11:55.596 "product_name": "Malloc disk", 00:11:55.596 "block_size": 512, 00:11:55.596 "num_blocks": 16384, 00:11:55.596 "uuid": "39b72338-56a6-4f4b-81ce-5ca1043ce503", 00:11:55.596 "assigned_rate_limits": { 00:11:55.596 "rw_ios_per_sec": 0, 00:11:55.596 "rw_mbytes_per_sec": 0, 00:11:55.596 "r_mbytes_per_sec": 0, 00:11:55.596 "w_mbytes_per_sec": 0 00:11:55.596 }, 00:11:55.596 "claimed": false, 00:11:55.596 "zoned": false, 00:11:55.596 "supported_io_types": { 00:11:55.596 "read": true, 00:11:55.596 "write": true, 00:11:55.596 "unmap": true, 00:11:55.596 "write_zeroes": true, 00:11:55.596 "flush": true, 00:11:55.596 "reset": true, 00:11:55.596 "compare": false, 00:11:55.596 "compare_and_write": false, 00:11:55.596 "abort": true, 00:11:55.596 "nvme_admin": false, 00:11:55.596 "nvme_io": false 00:11:55.596 }, 00:11:55.596 "memory_domains": [ 00:11:55.596 { 00:11:55.596 "dma_device_id": "system", 00:11:55.596 "dma_device_type": 1 00:11:55.596 }, 00:11:55.596 { 00:11:55.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.596 "dma_device_type": 2 00:11:55.596 } 00:11:55.596 ], 00:11:55.596 "driver_specific": {} 00:11:55.596 } 00:11:55.596 ]' 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 [2024-05-16 09:24:48.963903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:55.596 [2024-05-16 09:24:48.963953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.596 [2024-05-16 09:24:48.963968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1303b70 00:11:55.596 [2024-05-16 09:24:48.963976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.596 [2024-05-16 09:24:48.965550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.596 [2024-05-16 09:24:48.965588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:55.596 Passthru0 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 09:24:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:55.596 { 00:11:55.596 "name": "Malloc0", 00:11:55.596 "aliases": [ 00:11:55.596 "39b72338-56a6-4f4b-81ce-5ca1043ce503" 00:11:55.596 ], 00:11:55.596 "product_name": "Malloc disk", 00:11:55.596 "block_size": 512, 00:11:55.596 "num_blocks": 16384, 00:11:55.596 "uuid": "39b72338-56a6-4f4b-81ce-5ca1043ce503", 00:11:55.596 "assigned_rate_limits": { 00:11:55.596 "rw_ios_per_sec": 0, 00:11:55.596 "rw_mbytes_per_sec": 0, 00:11:55.596 "r_mbytes_per_sec": 0, 00:11:55.596 "w_mbytes_per_sec": 0 00:11:55.596 }, 00:11:55.596 "claimed": true, 00:11:55.596 "claim_type": "exclusive_write", 00:11:55.596 "zoned": false, 00:11:55.596 "supported_io_types": { 00:11:55.596 "read": true, 00:11:55.596 "write": true, 00:11:55.596 "unmap": true, 00:11:55.596 "write_zeroes": true, 00:11:55.596 "flush": true, 00:11:55.596 "reset": true, 00:11:55.596 "compare": false, 00:11:55.596 "compare_and_write": false, 00:11:55.596 "abort": true, 00:11:55.596 "nvme_admin": false, 00:11:55.596 "nvme_io": false 00:11:55.596 }, 00:11:55.596 "memory_domains": [ 00:11:55.596 { 00:11:55.596 "dma_device_id": "system", 00:11:55.596 "dma_device_type": 1 00:11:55.596 }, 00:11:55.596 { 00:11:55.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.596 "dma_device_type": 2 00:11:55.596 } 00:11:55.596 ], 00:11:55.596 "driver_specific": {} 00:11:55.596 }, 00:11:55.596 { 00:11:55.596 "name": "Passthru0", 00:11:55.596 "aliases": [ 00:11:55.596 "3fec8406-2bca-5f86-bbe8-af98dc380f35" 00:11:55.596 ], 00:11:55.596 "product_name": "passthru", 00:11:55.596 "block_size": 512, 00:11:55.596 "num_blocks": 16384, 00:11:55.596 "uuid": "3fec8406-2bca-5f86-bbe8-af98dc380f35", 00:11:55.596 "assigned_rate_limits": { 00:11:55.596 "rw_ios_per_sec": 0, 00:11:55.596 "rw_mbytes_per_sec": 0, 00:11:55.596 "r_mbytes_per_sec": 0, 00:11:55.596 "w_mbytes_per_sec": 0 00:11:55.596 }, 00:11:55.596 "claimed": false, 00:11:55.596 "zoned": false, 00:11:55.596 "supported_io_types": { 00:11:55.596 "read": true, 00:11:55.596 "write": true, 00:11:55.596 "unmap": true, 00:11:55.596 "write_zeroes": true, 00:11:55.596 "flush": true, 00:11:55.596 "reset": true, 00:11:55.596 "compare": false, 00:11:55.596 "compare_and_write": false, 00:11:55.596 "abort": true, 00:11:55.596 "nvme_admin": false, 00:11:55.596 "nvme_io": false 00:11:55.596 }, 00:11:55.596 "memory_domains": [ 00:11:55.596 { 00:11:55.596 "dma_device_id": "system", 00:11:55.596 "dma_device_type": 1 00:11:55.596 }, 00:11:55.596 { 00:11:55.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.596 "dma_device_type": 2 00:11:55.596 } 00:11:55.596 ], 00:11:55.596 "driver_specific": { 00:11:55.596 "passthru": { 00:11:55.596 "name": "Passthru0", 00:11:55.596 "base_bdev_name": "Malloc0" 00:11:55.596 } 00:11:55.596 } 00:11:55.596 } 00:11:55.596 ]' 00:11:55.596 09:24:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:55.596 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:55.596 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 
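The rpc_integrity output above boils down to a short RPC sequence: create an 8 MiB malloc bdev with 512-byte blocks, claim it with a passthru vbdev, confirm both appear in bdev_get_bdevs, then tear both down. A simplified way to replay it by hand against a running target is sketched below; the RPC variable is shorthand for the in-tree client that the test's rpc_cmd wrapper ultimately drives, and the expected jq counts are taken from the checks in the log.

# Sketch of the rpc_integrity sequence above, run against an already-started spdk_tgt.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
"$RPC" bdev_get_bdevs | jq length                       # 0 bdevs to begin with
malloc=$("$RPC" bdev_malloc_create 8 512)               # 8 MiB / 512-byte blocks, prints the name (Malloc0)
"$RPC" bdev_passthru_create -b "$malloc" -p Passthru0   # passthru vbdev claims the malloc bdev
"$RPC" bdev_get_bdevs | jq length                       # 2: the claimed malloc bdev plus Passthru0
"$RPC" bdev_passthru_delete Passthru0
"$RPC" bdev_malloc_delete "$malloc"
"$RPC" bdev_get_bdevs | jq length                       # back to 0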
09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.596 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.596 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:55.596 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:55.597 09:24:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:55.597 00:11:55.597 real 0m0.301s 00:11:55.597 user 0m0.187s 00:11:55.597 sys 0m0.045s 00:11:55.597 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.597 09:24:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:55.597 ************************************ 00:11:55.597 END TEST rpc_integrity 00:11:55.597 ************************************ 00:11:55.858 09:24:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:55.858 09:24:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.858 09:24:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.858 09:24:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.858 ************************************ 00:11:55.859 START TEST rpc_plugins 00:11:55.859 ************************************ 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:55.859 { 00:11:55.859 "name": "Malloc1", 00:11:55.859 "aliases": [ 00:11:55.859 "c2579865-30cd-419b-b57b-b6b2d199a1c1" 00:11:55.859 ], 00:11:55.859 "product_name": "Malloc disk", 00:11:55.859 "block_size": 4096, 00:11:55.859 "num_blocks": 256, 00:11:55.859 "uuid": "c2579865-30cd-419b-b57b-b6b2d199a1c1", 00:11:55.859 "assigned_rate_limits": { 00:11:55.859 "rw_ios_per_sec": 0, 00:11:55.859 "rw_mbytes_per_sec": 0, 00:11:55.859 "r_mbytes_per_sec": 0, 00:11:55.859 "w_mbytes_per_sec": 0 00:11:55.859 }, 00:11:55.859 "claimed": false, 00:11:55.859 "zoned": false, 00:11:55.859 "supported_io_types": { 00:11:55.859 "read": true, 00:11:55.859 "write": true, 00:11:55.859 "unmap": true, 00:11:55.859 "write_zeroes": true, 00:11:55.859 
"flush": true, 00:11:55.859 "reset": true, 00:11:55.859 "compare": false, 00:11:55.859 "compare_and_write": false, 00:11:55.859 "abort": true, 00:11:55.859 "nvme_admin": false, 00:11:55.859 "nvme_io": false 00:11:55.859 }, 00:11:55.859 "memory_domains": [ 00:11:55.859 { 00:11:55.859 "dma_device_id": "system", 00:11:55.859 "dma_device_type": 1 00:11:55.859 }, 00:11:55.859 { 00:11:55.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.859 "dma_device_type": 2 00:11:55.859 } 00:11:55.859 ], 00:11:55.859 "driver_specific": {} 00:11:55.859 } 00:11:55.859 ]' 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:55.859 09:24:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:55.859 00:11:55.859 real 0m0.152s 00:11:55.859 user 0m0.096s 00:11:55.859 sys 0m0.019s 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.859 09:24:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:55.859 ************************************ 00:11:55.859 END TEST rpc_plugins 00:11:55.859 ************************************ 00:11:55.859 09:24:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:55.859 09:24:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.859 09:24:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.859 09:24:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.121 ************************************ 00:11:56.121 START TEST rpc_trace_cmd_test 00:11:56.121 ************************************ 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:56.121 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid129858", 00:11:56.121 "tpoint_group_mask": "0x8", 00:11:56.121 "iscsi_conn": { 00:11:56.121 "mask": "0x2", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "scsi": { 00:11:56.121 "mask": "0x4", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "bdev": { 00:11:56.121 "mask": "0x8", 00:11:56.121 "tpoint_mask": 
"0xffffffffffffffff" 00:11:56.121 }, 00:11:56.121 "nvmf_rdma": { 00:11:56.121 "mask": "0x10", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "nvmf_tcp": { 00:11:56.121 "mask": "0x20", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "ftl": { 00:11:56.121 "mask": "0x40", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "blobfs": { 00:11:56.121 "mask": "0x80", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "dsa": { 00:11:56.121 "mask": "0x200", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "thread": { 00:11:56.121 "mask": "0x400", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "nvme_pcie": { 00:11:56.121 "mask": "0x800", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "iaa": { 00:11:56.121 "mask": "0x1000", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "nvme_tcp": { 00:11:56.121 "mask": "0x2000", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "bdev_nvme": { 00:11:56.121 "mask": "0x4000", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 }, 00:11:56.121 "sock": { 00:11:56.121 "mask": "0x8000", 00:11:56.121 "tpoint_mask": "0x0" 00:11:56.121 } 00:11:56.121 }' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:56.121 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:56.383 09:24:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:56.383 00:11:56.383 real 0m0.252s 00:11:56.383 user 0m0.216s 00:11:56.383 sys 0m0.028s 00:11:56.383 09:24:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.383 09:24:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.383 ************************************ 00:11:56.383 END TEST rpc_trace_cmd_test 00:11:56.383 ************************************ 00:11:56.383 09:24:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:56.383 09:24:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:56.383 09:24:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:56.383 09:24:49 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:56.383 09:24:49 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:56.383 09:24:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.383 ************************************ 00:11:56.383 START TEST rpc_daemon_integrity 00:11:56.383 ************************************ 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:56.383 { 00:11:56.383 "name": "Malloc2", 00:11:56.383 "aliases": [ 00:11:56.383 "8666269b-f197-4bcc-875f-133eca85ed97" 00:11:56.383 ], 00:11:56.383 "product_name": "Malloc disk", 00:11:56.383 "block_size": 512, 00:11:56.383 "num_blocks": 16384, 00:11:56.383 "uuid": "8666269b-f197-4bcc-875f-133eca85ed97", 00:11:56.383 "assigned_rate_limits": { 00:11:56.383 "rw_ios_per_sec": 0, 00:11:56.383 "rw_mbytes_per_sec": 0, 00:11:56.383 "r_mbytes_per_sec": 0, 00:11:56.383 "w_mbytes_per_sec": 0 00:11:56.383 }, 00:11:56.383 "claimed": false, 00:11:56.383 "zoned": false, 00:11:56.383 "supported_io_types": { 00:11:56.383 "read": true, 00:11:56.383 "write": true, 00:11:56.383 "unmap": true, 00:11:56.383 "write_zeroes": true, 00:11:56.383 "flush": true, 00:11:56.383 "reset": true, 00:11:56.383 "compare": false, 00:11:56.383 "compare_and_write": false, 00:11:56.383 "abort": true, 00:11:56.383 "nvme_admin": false, 00:11:56.383 "nvme_io": false 00:11:56.383 }, 00:11:56.383 "memory_domains": [ 00:11:56.383 { 00:11:56.383 "dma_device_id": "system", 00:11:56.383 "dma_device_type": 1 00:11:56.383 }, 00:11:56.383 { 00:11:56.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.383 "dma_device_type": 2 00:11:56.383 } 00:11:56.383 ], 00:11:56.383 "driver_specific": {} 00:11:56.383 } 00:11:56.383 ]' 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.383 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.383 [2024-05-16 09:24:49.918515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:56.383 [2024-05-16 09:24:49.918562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.383 [2024-05-16 09:24:49.918582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1303680 00:11:56.383 [2024-05-16 09:24:49.918590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.383 [2024-05-16 09:24:49.920028] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.383 [2024-05-16 09:24:49.920074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:56.384 Passthru0 00:11:56.384 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.384 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:56.384 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.384 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:56.646 { 00:11:56.646 "name": "Malloc2", 00:11:56.646 "aliases": [ 00:11:56.646 "8666269b-f197-4bcc-875f-133eca85ed97" 00:11:56.646 ], 00:11:56.646 "product_name": "Malloc disk", 00:11:56.646 "block_size": 512, 00:11:56.646 "num_blocks": 16384, 00:11:56.646 "uuid": "8666269b-f197-4bcc-875f-133eca85ed97", 00:11:56.646 "assigned_rate_limits": { 00:11:56.646 "rw_ios_per_sec": 0, 00:11:56.646 "rw_mbytes_per_sec": 0, 00:11:56.646 "r_mbytes_per_sec": 0, 00:11:56.646 "w_mbytes_per_sec": 0 00:11:56.646 }, 00:11:56.646 "claimed": true, 00:11:56.646 "claim_type": "exclusive_write", 00:11:56.646 "zoned": false, 00:11:56.646 "supported_io_types": { 00:11:56.646 "read": true, 00:11:56.646 "write": true, 00:11:56.646 "unmap": true, 00:11:56.646 "write_zeroes": true, 00:11:56.646 "flush": true, 00:11:56.646 "reset": true, 00:11:56.646 "compare": false, 00:11:56.646 "compare_and_write": false, 00:11:56.646 "abort": true, 00:11:56.646 "nvme_admin": false, 00:11:56.646 "nvme_io": false 00:11:56.646 }, 00:11:56.646 "memory_domains": [ 00:11:56.646 { 00:11:56.646 "dma_device_id": "system", 00:11:56.646 "dma_device_type": 1 00:11:56.646 }, 00:11:56.646 { 00:11:56.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.646 "dma_device_type": 2 00:11:56.646 } 00:11:56.646 ], 00:11:56.646 "driver_specific": {} 00:11:56.646 }, 00:11:56.646 { 00:11:56.646 "name": "Passthru0", 00:11:56.646 "aliases": [ 00:11:56.646 "6c45c477-444b-5b56-ac3e-017af8ff807e" 00:11:56.646 ], 00:11:56.646 "product_name": "passthru", 00:11:56.646 "block_size": 512, 00:11:56.646 "num_blocks": 16384, 00:11:56.646 "uuid": "6c45c477-444b-5b56-ac3e-017af8ff807e", 00:11:56.646 "assigned_rate_limits": { 00:11:56.646 "rw_ios_per_sec": 0, 00:11:56.646 "rw_mbytes_per_sec": 0, 00:11:56.646 "r_mbytes_per_sec": 0, 00:11:56.646 "w_mbytes_per_sec": 0 00:11:56.646 }, 00:11:56.646 "claimed": false, 00:11:56.646 "zoned": false, 00:11:56.646 "supported_io_types": { 00:11:56.646 "read": true, 00:11:56.646 "write": true, 00:11:56.646 "unmap": true, 00:11:56.646 "write_zeroes": true, 00:11:56.646 "flush": true, 00:11:56.646 "reset": true, 00:11:56.646 "compare": false, 00:11:56.646 "compare_and_write": false, 00:11:56.646 "abort": true, 00:11:56.646 "nvme_admin": false, 00:11:56.646 "nvme_io": false 00:11:56.646 }, 00:11:56.646 "memory_domains": [ 00:11:56.646 { 00:11:56.646 "dma_device_id": "system", 00:11:56.646 "dma_device_type": 1 00:11:56.646 }, 00:11:56.646 { 00:11:56.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.646 "dma_device_type": 2 00:11:56.646 } 00:11:56.646 ], 00:11:56.646 "driver_specific": { 00:11:56.646 "passthru": { 00:11:56.646 "name": "Passthru0", 00:11:56.646 "base_bdev_name": "Malloc2" 00:11:56.646 } 00:11:56.646 } 00:11:56.646 } 00:11:56.646 ]' 00:11:56.646 09:24:49 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.646 09:24:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:56.646 00:11:56.646 real 0m0.294s 00:11:56.646 user 0m0.180s 00:11:56.646 sys 0m0.050s 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.646 09:24:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 ************************************ 00:11:56.646 END TEST rpc_daemon_integrity 00:11:56.646 ************************************ 00:11:56.646 09:24:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:56.646 09:24:50 rpc -- rpc/rpc.sh@84 -- # killprocess 129858 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@946 -- # '[' -z 129858 ']' 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@950 -- # kill -0 129858 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@951 -- # uname 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 129858 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 129858' 00:11:56.646 killing process with pid 129858 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@965 -- # kill 129858 00:11:56.646 09:24:50 rpc -- common/autotest_common.sh@970 -- # wait 129858 00:11:56.907 00:11:56.907 real 0m2.579s 00:11:56.907 user 0m3.315s 00:11:56.907 sys 0m0.799s 00:11:56.907 09:24:50 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.907 09:24:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.907 ************************************ 00:11:56.907 END TEST rpc 00:11:56.907 ************************************ 00:11:56.907 09:24:50 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:11:56.907 09:24:50 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:56.907 09:24:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:56.907 09:24:50 -- common/autotest_common.sh@10 -- # set +x 00:11:57.169 ************************************ 00:11:57.169 START TEST skip_rpc 00:11:57.169 ************************************ 00:11:57.169 09:24:50 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:11:57.169 * Looking for test storage... 00:11:57.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:11:57.169 09:24:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:11:57.169 09:24:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:11:57.169 09:24:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:57.169 09:24:50 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:57.169 09:24:50 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:57.169 09:24:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.169 ************************************ 00:11:57.169 START TEST skip_rpc 00:11:57.169 ************************************ 00:11:57.169 09:24:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:11:57.169 09:24:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=130528 00:11:57.169 09:24:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:57.169 09:24:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:57.169 09:24:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:57.169 [2024-05-16 09:24:50.698938] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:11:57.169 [2024-05-16 09:24:50.699007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130528 ] 00:11:57.431 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.431 [2024-05-16 09:24:50.780418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.431 [2024-05-16 09:24:50.877638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 130528 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 130528 ']' 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 130528 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:02.722 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130528 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130528' 00:12:02.723 killing process with pid 130528 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 130528 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 130528 00:12:02.723 00:12:02.723 real 0m5.249s 00:12:02.723 user 0m5.000s 00:12:02.723 sys 0m0.276s 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.723 09:24:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.723 ************************************ 00:12:02.723 END TEST skip_rpc 
00:12:02.723 ************************************ 00:12:02.723 09:24:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:12:02.723 09:24:55 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:02.723 09:24:55 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.723 09:24:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.723 ************************************ 00:12:02.723 START TEST skip_rpc_with_json 00:12:02.723 ************************************ 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=131702 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 131702 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 131702 ']' 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.723 09:24:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:02.723 [2024-05-16 09:24:56.032085] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
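To recap the skip_rpc case that finished just above, before the skip_rpc_with_json startup continues: spdk_tgt is started with --no-rpc-server, so the later rpc_cmd spdk_get_version call has nothing to talk to, and the test passes only if that call fails. A hand-run sketch of the same check follows; the flags are taken from the log and the process handling is simplified.

# Sketch of the skip_rpc check: with --no-rpc-server, any RPC must fail.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                                  # the test script also sleeps before probing
if "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
    echo "unexpected: RPC succeeded even though no RPC server was started" >&2
else
    echo "expected: RPC failed, target has no RPC server"
fi
kill "$spdk_pid"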
00:12:02.723 [2024-05-16 09:24:56.032147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131702 ] 00:12:02.723 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.723 [2024-05-16 09:24:56.107532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.723 [2024-05-16 09:24:56.167199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:03.294 [2024-05-16 09:24:56.795210] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:12:03.294 request: 00:12:03.294 { 00:12:03.294 "trtype": "tcp", 00:12:03.294 "method": "nvmf_get_transports", 00:12:03.294 "req_id": 1 00:12:03.294 } 00:12:03.294 Got JSON-RPC error response 00:12:03.294 response: 00:12:03.294 { 00:12:03.294 "code": -19, 00:12:03.294 "message": "No such device" 00:12:03.294 } 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:03.294 09:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:12:03.295 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.295 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:03.295 [2024-05-16 09:24:56.807300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.295 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.295 09:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:12:03.295 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.295 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:03.555 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.555 09:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:03.555 { 00:12:03.555 "subsystems": [ 00:12:03.555 { 00:12:03.555 "subsystem": "vfio_user_target", 00:12:03.555 "config": null 00:12:03.555 }, 00:12:03.555 { 00:12:03.555 "subsystem": "keyring", 00:12:03.555 "config": [] 00:12:03.555 }, 00:12:03.555 { 00:12:03.555 "subsystem": "iobuf", 00:12:03.555 "config": [ 00:12:03.555 { 00:12:03.555 "method": "iobuf_set_options", 00:12:03.556 "params": { 00:12:03.556 "small_pool_count": 8192, 00:12:03.556 "large_pool_count": 1024, 00:12:03.556 "small_bufsize": 8192, 00:12:03.556 "large_bufsize": 135168 00:12:03.556 } 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "sock", 00:12:03.556 "config": [ 00:12:03.556 { 00:12:03.556 "method": "sock_impl_set_options", 00:12:03.556 "params": { 00:12:03.556 "impl_name": "posix", 00:12:03.556 "recv_buf_size": 2097152, 00:12:03.556 "send_buf_size": 2097152, 
00:12:03.556 "enable_recv_pipe": true, 00:12:03.556 "enable_quickack": false, 00:12:03.556 "enable_placement_id": 0, 00:12:03.556 "enable_zerocopy_send_server": true, 00:12:03.556 "enable_zerocopy_send_client": false, 00:12:03.556 "zerocopy_threshold": 0, 00:12:03.556 "tls_version": 0, 00:12:03.556 "enable_ktls": false 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "sock_impl_set_options", 00:12:03.556 "params": { 00:12:03.556 "impl_name": "ssl", 00:12:03.556 "recv_buf_size": 4096, 00:12:03.556 "send_buf_size": 4096, 00:12:03.556 "enable_recv_pipe": true, 00:12:03.556 "enable_quickack": false, 00:12:03.556 "enable_placement_id": 0, 00:12:03.556 "enable_zerocopy_send_server": true, 00:12:03.556 "enable_zerocopy_send_client": false, 00:12:03.556 "zerocopy_threshold": 0, 00:12:03.556 "tls_version": 0, 00:12:03.556 "enable_ktls": false 00:12:03.556 } 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "vmd", 00:12:03.556 "config": [] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "accel", 00:12:03.556 "config": [ 00:12:03.556 { 00:12:03.556 "method": "accel_set_options", 00:12:03.556 "params": { 00:12:03.556 "small_cache_size": 128, 00:12:03.556 "large_cache_size": 16, 00:12:03.556 "task_count": 2048, 00:12:03.556 "sequence_count": 2048, 00:12:03.556 "buf_count": 2048 00:12:03.556 } 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "bdev", 00:12:03.556 "config": [ 00:12:03.556 { 00:12:03.556 "method": "bdev_set_options", 00:12:03.556 "params": { 00:12:03.556 "bdev_io_pool_size": 65535, 00:12:03.556 "bdev_io_cache_size": 256, 00:12:03.556 "bdev_auto_examine": true, 00:12:03.556 "iobuf_small_cache_size": 128, 00:12:03.556 "iobuf_large_cache_size": 16 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "bdev_raid_set_options", 00:12:03.556 "params": { 00:12:03.556 "process_window_size_kb": 1024 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "bdev_iscsi_set_options", 00:12:03.556 "params": { 00:12:03.556 "timeout_sec": 30 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "bdev_nvme_set_options", 00:12:03.556 "params": { 00:12:03.556 "action_on_timeout": "none", 00:12:03.556 "timeout_us": 0, 00:12:03.556 "timeout_admin_us": 0, 00:12:03.556 "keep_alive_timeout_ms": 10000, 00:12:03.556 "arbitration_burst": 0, 00:12:03.556 "low_priority_weight": 0, 00:12:03.556 "medium_priority_weight": 0, 00:12:03.556 "high_priority_weight": 0, 00:12:03.556 "nvme_adminq_poll_period_us": 10000, 00:12:03.556 "nvme_ioq_poll_period_us": 0, 00:12:03.556 "io_queue_requests": 0, 00:12:03.556 "delay_cmd_submit": true, 00:12:03.556 "transport_retry_count": 4, 00:12:03.556 "bdev_retry_count": 3, 00:12:03.556 "transport_ack_timeout": 0, 00:12:03.556 "ctrlr_loss_timeout_sec": 0, 00:12:03.556 "reconnect_delay_sec": 0, 00:12:03.556 "fast_io_fail_timeout_sec": 0, 00:12:03.556 "disable_auto_failback": false, 00:12:03.556 "generate_uuids": false, 00:12:03.556 "transport_tos": 0, 00:12:03.556 "nvme_error_stat": false, 00:12:03.556 "rdma_srq_size": 0, 00:12:03.556 "io_path_stat": false, 00:12:03.556 "allow_accel_sequence": false, 00:12:03.556 "rdma_max_cq_size": 0, 00:12:03.556 "rdma_cm_event_timeout_ms": 0, 00:12:03.556 "dhchap_digests": [ 00:12:03.556 "sha256", 00:12:03.556 "sha384", 00:12:03.556 "sha512" 00:12:03.556 ], 00:12:03.556 "dhchap_dhgroups": [ 00:12:03.556 "null", 00:12:03.556 "ffdhe2048", 00:12:03.556 "ffdhe3072", 00:12:03.556 "ffdhe4096", 00:12:03.556 
"ffdhe6144", 00:12:03.556 "ffdhe8192" 00:12:03.556 ] 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "bdev_nvme_set_hotplug", 00:12:03.556 "params": { 00:12:03.556 "period_us": 100000, 00:12:03.556 "enable": false 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "bdev_wait_for_examine" 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "scsi", 00:12:03.556 "config": null 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "scheduler", 00:12:03.556 "config": [ 00:12:03.556 { 00:12:03.556 "method": "framework_set_scheduler", 00:12:03.556 "params": { 00:12:03.556 "name": "static" 00:12:03.556 } 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "vhost_scsi", 00:12:03.556 "config": [] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "vhost_blk", 00:12:03.556 "config": [] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "ublk", 00:12:03.556 "config": [] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "nbd", 00:12:03.556 "config": [] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "nvmf", 00:12:03.556 "config": [ 00:12:03.556 { 00:12:03.556 "method": "nvmf_set_config", 00:12:03.556 "params": { 00:12:03.556 "discovery_filter": "match_any", 00:12:03.556 "admin_cmd_passthru": { 00:12:03.556 "identify_ctrlr": false 00:12:03.556 } 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "nvmf_set_max_subsystems", 00:12:03.556 "params": { 00:12:03.556 "max_subsystems": 1024 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "nvmf_set_crdt", 00:12:03.556 "params": { 00:12:03.556 "crdt1": 0, 00:12:03.556 "crdt2": 0, 00:12:03.556 "crdt3": 0 00:12:03.556 } 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "method": "nvmf_create_transport", 00:12:03.556 "params": { 00:12:03.556 "trtype": "TCP", 00:12:03.556 "max_queue_depth": 128, 00:12:03.556 "max_io_qpairs_per_ctrlr": 127, 00:12:03.556 "in_capsule_data_size": 4096, 00:12:03.556 "max_io_size": 131072, 00:12:03.556 "io_unit_size": 131072, 00:12:03.556 "max_aq_depth": 128, 00:12:03.556 "num_shared_buffers": 511, 00:12:03.556 "buf_cache_size": 4294967295, 00:12:03.556 "dif_insert_or_strip": false, 00:12:03.556 "zcopy": false, 00:12:03.556 "c2h_success": true, 00:12:03.556 "sock_priority": 0, 00:12:03.556 "abort_timeout_sec": 1, 00:12:03.556 "ack_timeout": 0, 00:12:03.556 "data_wr_pool_size": 0 00:12:03.556 } 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 }, 00:12:03.556 { 00:12:03.556 "subsystem": "iscsi", 00:12:03.556 "config": [ 00:12:03.556 { 00:12:03.556 "method": "iscsi_set_options", 00:12:03.556 "params": { 00:12:03.556 "node_base": "iqn.2016-06.io.spdk", 00:12:03.556 "max_sessions": 128, 00:12:03.556 "max_connections_per_session": 2, 00:12:03.556 "max_queue_depth": 64, 00:12:03.556 "default_time2wait": 2, 00:12:03.556 "default_time2retain": 20, 00:12:03.556 "first_burst_length": 8192, 00:12:03.556 "immediate_data": true, 00:12:03.556 "allow_duplicated_isid": false, 00:12:03.556 "error_recovery_level": 0, 00:12:03.556 "nop_timeout": 60, 00:12:03.556 "nop_in_interval": 30, 00:12:03.556 "disable_chap": false, 00:12:03.556 "require_chap": false, 00:12:03.556 "mutual_chap": false, 00:12:03.556 "chap_group": 0, 00:12:03.556 "max_large_datain_per_connection": 64, 00:12:03.556 "max_r2t_per_connection": 4, 00:12:03.556 "pdu_pool_size": 36864, 00:12:03.556 "immediate_data_pool_size": 16384, 00:12:03.556 "data_out_pool_size": 2048 00:12:03.556 } 00:12:03.556 } 00:12:03.556 ] 00:12:03.556 } 
00:12:03.556 ] 00:12:03.556 } 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 131702 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 131702 ']' 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 131702 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:03.556 09:24:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131702 00:12:03.556 09:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:03.556 09:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:03.556 09:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131702' 00:12:03.556 killing process with pid 131702 00:12:03.556 09:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 131702 00:12:03.556 09:24:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 131702 00:12:03.817 09:24:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=131801 00:12:03.817 09:24:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:12:03.817 09:24:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 131801 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 131801 ']' 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 131801 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131801 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:09.108 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131801' 00:12:09.108 killing process with pid 131801 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 131801 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 131801 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:12:09.109 00:12:09.109 real 0m6.488s 00:12:09.109 user 0m6.361s 00:12:09.109 sys 0m0.536s 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:09.109 09:25:02 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:09.109 ************************************ 00:12:09.109 END TEST skip_rpc_with_json 00:12:09.109 ************************************ 00:12:09.109 09:25:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:12:09.109 09:25:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:09.109 09:25:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:09.109 09:25:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.109 ************************************ 00:12:09.109 START TEST skip_rpc_with_delay 00:12:09.109 ************************************ 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:09.109 [2024-05-16 09:25:02.608981] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
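Note on the test that just started: skip_rpc_with_delay passes --no-rpc-server together with --wait-for-rpc and expects spdk_tgt to refuse the combination, which is exactly the *ERROR* line above. A minimal stand-alone reproduction of that check is sketched below; the binary path is the one used throughout this job, and the NOT/es bookkeeping from autotest_common.sh is replaced here by a plain exit-status check.

    # Sketch: spdk_tgt should fail fast when asked to wait for RPCs
    # while its RPC server is disabled (expected: non-zero exit status).
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc
    rc=$?
    [ "$rc" -ne 0 ] && echo "got the expected startup failure (rc=$rc)"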
00:12:09.109 [2024-05-16 09:25:02.609092] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.109 00:12:09.109 real 0m0.075s 00:12:09.109 user 0m0.050s 00:12:09.109 sys 0m0.024s 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:09.109 09:25:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:12:09.109 ************************************ 00:12:09.109 END TEST skip_rpc_with_delay 00:12:09.109 ************************************ 00:12:09.109 09:25:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:12:09.370 09:25:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:12:09.370 09:25:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:12:09.370 09:25:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:09.370 09:25:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:09.370 09:25:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.370 ************************************ 00:12:09.370 START TEST exit_on_failed_rpc_init 00:12:09.370 ************************************ 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=133119 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 133119 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 133119 ']' 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:09.370 09:25:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:09.370 [2024-05-16 09:25:02.775382] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:12:09.370 [2024-05-16 09:25:02.775442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133119 ] 00:12:09.370 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.370 [2024-05-16 09:25:02.850435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.370 [2024-05-16 09:25:02.906680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:12:10.312 [2024-05-16 09:25:03.615089] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:10.312 [2024-05-16 09:25:03.615138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133158 ] 00:12:10.312 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.312 [2024-05-16 09:25:03.689245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.312 [2024-05-16 09:25:03.753916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.312 [2024-05-16 09:25:03.753975] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:12:10.312 [2024-05-16 09:25:03.753985] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:10.312 [2024-05-16 09:25:03.753992] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 133119 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 133119 ']' 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 133119 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133119 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133119' 00:12:10.312 killing process with pid 133119 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 133119 00:12:10.312 09:25:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 133119 00:12:10.571 00:12:10.571 real 0m1.344s 00:12:10.571 user 0m1.583s 00:12:10.571 sys 0m0.388s 00:12:10.571 09:25:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:10.571 09:25:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 ************************************ 00:12:10.571 END TEST exit_on_failed_rpc_init 00:12:10.571 ************************************ 00:12:10.571 09:25:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:12:10.571 00:12:10.571 real 0m13.605s 00:12:10.571 user 0m13.153s 00:12:10.571 sys 0m1.524s 00:12:10.571 09:25:04 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:10.571 09:25:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.571 ************************************ 00:12:10.571 END TEST skip_rpc 00:12:10.571 ************************************ 00:12:10.832 09:25:04 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:12:10.832 09:25:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:10.832 09:25:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:10.832 09:25:04 -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.832 ************************************ 00:12:10.832 START TEST rpc_client 00:12:10.832 ************************************ 00:12:10.832 09:25:04 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:12:10.832 * Looking for test storage... 00:12:10.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:12:10.832 09:25:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:12:10.832 OK 00:12:10.832 09:25:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:10.832 00:12:10.832 real 0m0.129s 00:12:10.832 user 0m0.057s 00:12:10.832 sys 0m0.079s 00:12:10.832 09:25:04 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:10.832 09:25:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:10.832 ************************************ 00:12:10.832 END TEST rpc_client 00:12:10.832 ************************************ 00:12:10.832 09:25:04 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:12:10.832 09:25:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:10.832 09:25:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:10.832 09:25:04 -- common/autotest_common.sh@10 -- # set +x 00:12:11.094 ************************************ 00:12:11.094 START TEST json_config 00:12:11.094 ************************************ 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:11.094 09:25:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.094 09:25:04 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.094 09:25:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.094 09:25:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.094 09:25:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.094 09:25:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.094 09:25:04 json_config -- paths/export.sh@5 -- # export PATH 00:12:11.094 09:25:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@47 -- # : 0 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:11.094 09:25:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:12:11.094 INFO: JSON configuration test init 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:11.094 09:25:04 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:12:11.094 09:25:04 json_config -- json_config/common.sh@9 -- # local app=target 00:12:11.094 09:25:04 json_config -- json_config/common.sh@10 -- # shift 00:12:11.094 09:25:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:11.094 09:25:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:11.094 09:25:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:11.094 09:25:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:11.094 09:25:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:11.094 09:25:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=133598 00:12:11.094 09:25:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:11.094 Waiting for target to run... 
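At this point the json_config suite has launched its target with --wait-for-rpc on the private socket /var/tmp/spdk_tgt.sock and blocks until that socket answers before driving it with scripts/rpc.py. A rough sketch of such a wait loop is below; rpc_get_methods is assumed here as a cheap liveness probe and does not appear in this trace.

    # Sketch: poll the target's private RPC socket until it accepts requests.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock
    for i in $(seq 1 100); do
        if "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            echo "target is up on $SOCK"
            break
        fi
        sleep 0.1
    done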
00:12:11.094 09:25:04 json_config -- json_config/common.sh@25 -- # waitforlisten 133598 /var/tmp/spdk_tgt.sock 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@827 -- # '[' -z 133598 ']' 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:11.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:11.094 09:25:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:11.094 09:25:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:11.094 [2024-05-16 09:25:04.571019] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:11.094 [2024-05-16 09:25:04.571091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133598 ] 00:12:11.094 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.355 [2024-05-16 09:25:04.867862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.355 [2024-05-16 09:25:04.911951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.926 09:25:05 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:11.926 09:25:05 json_config -- common/autotest_common.sh@860 -- # return 0 00:12:11.926 09:25:05 json_config -- json_config/common.sh@26 -- # echo '' 00:12:11.926 00:12:11.926 09:25:05 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:12:11.926 09:25:05 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:12:11.926 09:25:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:11.926 09:25:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:11.926 09:25:05 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:12:11.926 09:25:05 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:12:11.926 09:25:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.926 09:25:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:11.926 09:25:05 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:12:11.926 09:25:05 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:12:11.926 09:25:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:12:12.497 09:25:05 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:12:12.497 09:25:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:12:12.497 09:25:05 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:12.497 09:25:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:12.497 09:25:05 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:12:12.497 09:25:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:12:12.497 09:25:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:12:12.497 09:25:05 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:12:12.497 09:25:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:12:12.497 09:25:05 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:12:12.759 09:25:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.759 09:25:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@55 -- # return 0 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:12:12.759 09:25:06 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:12.759 09:25:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:12.759 09:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:12.759 MallocForNvmf0 00:12:12.759 09:25:06 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:12.759 09:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:13.020 MallocForNvmf1 00:12:13.020 09:25:06 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:12:13.020 09:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:12:13.290 [2024-05-16 09:25:06.592534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.290 09:25:06 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:13.290 09:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:13.290 09:25:06 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:13.290 09:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:13.552 09:25:06 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:13.552 09:25:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:13.552 09:25:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:13.552 09:25:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:13.813 [2024-05-16 09:25:07.186028] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:13.813 [2024-05-16 09:25:07.186358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:13.813 09:25:07 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:12:13.813 09:25:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.813 09:25:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.813 09:25:07 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:12:13.813 09:25:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.813 09:25:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.813 09:25:07 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:12:13.813 09:25:07 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:13.813 09:25:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:14.074 MallocBdevForConfigChangeCheck 00:12:14.074 09:25:07 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:12:14.074 09:25:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.074 09:25:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:14.074 09:25:07 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:12:14.074 09:25:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:14.334 09:25:07 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:12:14.334 INFO: shutting down applications... 00:12:14.334 09:25:07 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:12:14.334 09:25:07 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:12:14.334 09:25:07 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:12:14.334 09:25:07 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:12:14.594 Calling clear_iscsi_subsystem 00:12:14.594 Calling clear_nvmf_subsystem 00:12:14.594 Calling clear_nbd_subsystem 00:12:14.594 Calling clear_ublk_subsystem 00:12:14.594 Calling clear_vhost_blk_subsystem 00:12:14.594 Calling clear_vhost_scsi_subsystem 00:12:14.594 Calling clear_bdev_subsystem 00:12:14.854 09:25:08 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:12:14.854 09:25:08 json_config -- json_config/json_config.sh@343 -- # count=100 00:12:14.854 09:25:08 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:12:14.854 09:25:08 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:14.854 09:25:08 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:14.854 09:25:08 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:12:15.115 09:25:08 json_config -- json_config/json_config.sh@345 -- # break 00:12:15.115 09:25:08 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:12:15.115 09:25:08 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:12:15.115 09:25:08 json_config -- json_config/common.sh@31 -- # local app=target 00:12:15.115 09:25:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:15.115 09:25:08 json_config -- json_config/common.sh@35 -- # [[ -n 133598 ]] 00:12:15.115 09:25:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 133598 00:12:15.115 [2024-05-16 09:25:08.520241] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:15.115 09:25:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:15.115 09:25:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:15.115 09:25:08 json_config -- json_config/common.sh@41 -- # kill -0 133598 00:12:15.115 09:25:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:12:15.687 09:25:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:12:15.687 09:25:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:15.687 09:25:09 json_config -- json_config/common.sh@41 -- # kill -0 133598 00:12:15.687 09:25:09 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:15.687 09:25:09 json_config -- json_config/common.sh@43 -- # break 00:12:15.687 09:25:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:15.687 09:25:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:15.687 SPDK target shutdown done 00:12:15.687 09:25:09 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:12:15.687 INFO: relaunching applications... 00:12:15.687 09:25:09 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:15.687 09:25:09 json_config -- json_config/common.sh@9 -- # local app=target 00:12:15.687 09:25:09 json_config -- json_config/common.sh@10 -- # shift 00:12:15.687 09:25:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:15.687 09:25:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:15.687 09:25:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:15.687 09:25:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:15.687 09:25:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:15.687 09:25:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=134507 00:12:15.687 09:25:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:15.687 Waiting for target to run... 00:12:15.687 09:25:09 json_config -- json_config/common.sh@25 -- # waitforlisten 134507 /var/tmp/spdk_tgt.sock 00:12:15.687 09:25:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:15.687 09:25:09 json_config -- common/autotest_common.sh@827 -- # '[' -z 134507 ']' 00:12:15.687 09:25:09 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:15.687 09:25:09 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.687 09:25:09 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:15.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:15.687 09:25:09 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.687 09:25:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:15.687 [2024-05-16 09:25:09.080423] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
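The relaunch above boots spdk_tgt directly from the spdk_tgt_config.json that the first instance saved, so it should recreate the malloc bdevs, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 and the 127.0.0.1:4420 listener with no further RPCs. For reference, that configuration was originally built with the rpc.py calls traced earlier; a condensed sketch of the same sequence follows (the rpc_call wrapper is shorthand introduced here, all subcommands and arguments are copied from the trace).

    # Sketch: rebuild the NVMe-oF/TCP config that the saved JSON captures.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock
    rpc_call() { "$RPC" -s "$SOCK" "$@"; }
    # Backing bdevs for the two namespaces (sizes as in the trace above).
    rpc_call bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc_call bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, the subsystem, its namespaces and the 127.0.0.1:4420 listener.
    rpc_call nvmf_create_transport -t tcp -u 8192 -c 0
    rpc_call nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_call nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc_call nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc_call nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # Persist the result so a later "spdk_tgt --json <file>" run reproduces it.
    rpc_call save_config > spdk_tgt_config.json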
00:12:15.687 [2024-05-16 09:25:09.080485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134507 ] 00:12:15.687 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.948 [2024-05-16 09:25:09.348173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.948 [2024-05-16 09:25:09.401594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.520 [2024-05-16 09:25:09.878737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.520 [2024-05-16 09:25:09.910742] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:16.520 [2024-05-16 09:25:09.911060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:16.520 09:25:09 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:16.520 09:25:09 json_config -- common/autotest_common.sh@860 -- # return 0 00:12:16.520 09:25:09 json_config -- json_config/common.sh@26 -- # echo '' 00:12:16.520 00:12:16.520 09:25:09 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:12:16.520 09:25:09 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:16.520 INFO: Checking if target configuration is the same... 00:12:16.520 09:25:09 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:16.520 09:25:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:12:16.520 09:25:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:16.520 + '[' 2 -ne 2 ']' 00:12:16.520 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:12:16.520 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:12:16.520 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:16.520 +++ basename /dev/fd/62 00:12:16.520 ++ mktemp /tmp/62.XXX 00:12:16.520 + tmp_file_1=/tmp/62.V0Q 00:12:16.520 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:16.520 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:16.520 + tmp_file_2=/tmp/spdk_tgt_config.json.7Th 00:12:16.520 + ret=0 00:12:16.520 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:16.781 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:16.781 + diff -u /tmp/62.V0Q /tmp/spdk_tgt_config.json.7Th 00:12:16.781 + echo 'INFO: JSON config files are the same' 00:12:16.781 INFO: JSON config files are the same 00:12:16.781 + rm /tmp/62.V0Q /tmp/spdk_tgt_config.json.7Th 00:12:16.781 + exit 0 00:12:16.781 09:25:10 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:12:16.781 09:25:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:12:16.781 INFO: changing configuration and checking if this can be detected... 
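The "JSON config files are the same" verdict above comes from json_diff.sh: the on-disk spdk_tgt_config.json and a fresh save_config dump are both normalized with config_filter.py -method sort and then compared with diff -u, so key ordering cannot produce a false mismatch. A simplified version of that comparison is sketched below; the /tmp file names are illustrative (the harness uses mktemp), and config_filter.py is assumed to read the config on stdin, as json_diff.sh pipes it here.

    # Sketch: compare the saved config against the live target's config.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.json
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo "JSON config files are the same"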
00:12:16.781 09:25:10 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:16.781 09:25:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:17.043 09:25:10 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:17.043 09:25:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:12:17.043 09:25:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:17.043 + '[' 2 -ne 2 ']' 00:12:17.043 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:12:17.043 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:12:17.043 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:17.043 +++ basename /dev/fd/62 00:12:17.043 ++ mktemp /tmp/62.XXX 00:12:17.043 + tmp_file_1=/tmp/62.FuV 00:12:17.043 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:17.043 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:17.043 + tmp_file_2=/tmp/spdk_tgt_config.json.bMP 00:12:17.043 + ret=0 00:12:17.043 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:17.304 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:12:17.304 + diff -u /tmp/62.FuV /tmp/spdk_tgt_config.json.bMP 00:12:17.304 + ret=1 00:12:17.304 + echo '=== Start of file: /tmp/62.FuV ===' 00:12:17.304 + cat /tmp/62.FuV 00:12:17.304 + echo '=== End of file: /tmp/62.FuV ===' 00:12:17.304 + echo '' 00:12:17.304 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bMP ===' 00:12:17.304 + cat /tmp/spdk_tgt_config.json.bMP 00:12:17.304 + echo '=== End of file: /tmp/spdk_tgt_config.json.bMP ===' 00:12:17.304 + echo '' 00:12:17.304 + rm /tmp/62.FuV /tmp/spdk_tgt_config.json.bMP 00:12:17.304 + exit 1 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:12:17.304 INFO: configuration change detected. 
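The negative check just traced works the same way, except the running target is first mutated by deleting the MallocBdevForConfigChangeCheck canary bdev, so the sorted dumps diverge and diff exits non-zero. In outline, under the same assumptions as the previous sketch:

    # Sketch: mutate the live config, then expect the comparison to fail.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.json
    if ! diff -u /tmp/saved.json /tmp/live.json > /dev/null; then
        echo "configuration change detected"
    fi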
00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:12:17.304 09:25:10 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:17.304 09:25:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 134507 ]] 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:12:17.304 09:25:10 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:17.304 09:25:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@193 -- # uname -s 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:12:17.304 09:25:10 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:12:17.304 09:25:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.305 09:25:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:17.565 09:25:10 json_config -- json_config/json_config.sh@323 -- # killprocess 134507 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@946 -- # '[' -z 134507 ']' 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@950 -- # kill -0 134507 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@951 -- # uname 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134507 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:17.565 09:25:10 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:17.566 09:25:10 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134507' 00:12:17.566 killing process with pid 134507 00:12:17.566 09:25:10 json_config -- common/autotest_common.sh@965 -- # kill 134507 00:12:17.566 [2024-05-16 09:25:10.949198] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:17.566 09:25:10 json_config -- common/autotest_common.sh@970 -- # wait 134507 00:12:17.827 09:25:11 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:12:17.827 09:25:11 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:12:17.827 09:25:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.827 09:25:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:17.827 09:25:11 
json_config -- json_config/json_config.sh@328 -- # return 0 00:12:17.827 09:25:11 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:12:17.827 INFO: Success 00:12:17.827 00:12:17.827 real 0m6.853s 00:12:17.827 user 0m8.398s 00:12:17.827 sys 0m1.617s 00:12:17.827 09:25:11 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:17.827 09:25:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:17.827 ************************************ 00:12:17.827 END TEST json_config 00:12:17.827 ************************************ 00:12:17.828 09:25:11 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:12:17.828 09:25:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:17.828 09:25:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:17.828 09:25:11 -- common/autotest_common.sh@10 -- # set +x 00:12:17.828 ************************************ 00:12:17.828 START TEST json_config_extra_key 00:12:17.828 ************************************ 00:12:17.828 09:25:11 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:12:18.089 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.089 09:25:11 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.089 09:25:11 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.089 09:25:11 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.089 09:25:11 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.090 09:25:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 09:25:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 09:25:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 09:25:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:18.090 09:25:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.090 09:25:11 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:18.090 09:25:11 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:18.090 INFO: launching applications... 00:12:18.090 09:25:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=135181 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:18.090 Waiting for target to run... 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 135181 /var/tmp/spdk_tgt.sock 00:12:18.090 09:25:11 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 135181 ']' 00:12:18.090 09:25:11 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:18.090 09:25:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:12:18.090 09:25:11 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.090 09:25:11 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:18.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:18.090 09:25:11 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.090 09:25:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:18.090 [2024-05-16 09:25:11.489887] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
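json_config_extra_key only has to show that the target boots straight from test/json_config/extra_key.json and then shuts down cleanly on SIGINT, which is why the remainder of this test is just the kill loop and "Success". A hedged outline of that flow follows; the sleep is a crude stand-in for the waitforlisten helper, and the 30 x 0.5 s bound matches the shutdown loop in json_config/common.sh seen in this trace.

    # Sketch: start from a canned JSON config, then confirm SIGINT shuts it down.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/test/json_config/extra_key.json" &
    pid=$!
    sleep 1                       # crude stand-in for the harness's waitforlisten
    kill -SIGINT "$pid"
    for i in $(seq 1 30); do      # same 30 x 0.5 s bound the common.sh loop uses
        kill -0 "$pid" 2>/dev/null || { echo "SPDK target shutdown done"; break; }
        sleep 0.5
    done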
00:12:18.090 [2024-05-16 09:25:11.489962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135181 ] 00:12:18.090 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.351 [2024-05-16 09:25:11.750788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.351 [2024-05-16 09:25:11.795254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.925 09:25:12 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.925 09:25:12 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:18.925 00:12:18.925 09:25:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:12:18.925 INFO: shutting down applications... 00:12:18.925 09:25:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 135181 ]] 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 135181 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 135181 00:12:18.925 09:25:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 135181 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:19.499 09:25:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:19.499 SPDK target shutdown done 00:12:19.499 09:25:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:19.499 Success 00:12:19.499 00:12:19.499 real 0m1.435s 00:12:19.499 user 0m1.067s 00:12:19.499 sys 0m0.361s 00:12:19.499 09:25:12 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.499 09:25:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:19.499 ************************************ 00:12:19.499 END TEST json_config_extra_key 00:12:19.499 ************************************ 00:12:19.499 09:25:12 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:19.499 09:25:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:19.499 09:25:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.499 09:25:12 -- common/autotest_common.sh@10 -- # set +x 00:12:19.499 ************************************ 
00:12:19.499 START TEST alias_rpc 00:12:19.499 ************************************ 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:19.499 * Looking for test storage... 00:12:19.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:12:19.499 09:25:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:19.499 09:25:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=135560 00:12:19.499 09:25:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 135560 00:12:19.499 09:25:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 135560 ']' 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:19.499 09:25:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.499 [2024-05-16 09:25:13.002197] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:19.499 [2024-05-16 09:25:13.002262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135560 ] 00:12:19.499 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.760 [2024-05-16 09:25:13.083512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.761 [2024-05-16 09:25:13.146380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.336 09:25:13 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:20.336 09:25:13 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:20.336 09:25:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:12:20.598 09:25:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 135560 00:12:20.599 09:25:13 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 135560 ']' 00:12:20.599 09:25:13 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 135560 00:12:20.599 09:25:13 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:12:20.599 09:25:13 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:20.599 09:25:13 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135560 00:12:20.599 09:25:14 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:20.599 09:25:14 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:20.599 09:25:14 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135560' 00:12:20.599 killing process with pid 135560 00:12:20.599 09:25:14 alias_rpc -- common/autotest_common.sh@965 -- # kill 135560 00:12:20.599 09:25:14 alias_rpc -- common/autotest_common.sh@970 -- # wait 135560 00:12:20.860 
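The alias_rpc trace above starts a bare spdk_tgt and then drives it with scripts/rpc.py load_config, which reads an RPC-style JSON configuration from stdin and replays the contained calls against the running target (the test also passes an extra -i switch, omitted from the sketch below). A minimal out-of-harness version of that round trip might look like the following; the build-tree paths, the Malloc0 example bdev and the sleep stand-in for the harness's waitforlisten helper are illustrative assumptions, not taken from the log:

    # Sketch only: save a running target's configuration and replay it into a fresh target.
    ./build/bin/spdk_tgt &
    sleep 1                                                # the harness polls the RPC socket instead
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0  # hypothetical bdev so there is state to capture
    ./scripts/rpc.py save_config > config.json             # dump the live configuration as RPC JSON
    ./scripts/rpc.py spdk_kill_instance SIGTERM && wait
    ./build/bin/spdk_tgt &
    sleep 1
    ./scripts/rpc.py load_config < config.json             # replay the captured RPCs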
00:12:20.860 real 0m1.370s 00:12:20.860 user 0m1.518s 00:12:20.860 sys 0m0.383s 00:12:20.860 09:25:14 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:20.860 09:25:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 ************************************ 00:12:20.860 END TEST alias_rpc 00:12:20.860 ************************************ 00:12:20.860 09:25:14 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:12:20.860 09:25:14 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:12:20.860 09:25:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:20.860 09:25:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:20.860 09:25:14 -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 ************************************ 00:12:20.860 START TEST spdkcli_tcp 00:12:20.860 ************************************ 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:12:20.860 * Looking for test storage... 00:12:20.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=135920 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 135920 00:12:20.860 09:25:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 135920 ']' 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:20.860 09:25:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.120 [2024-05-16 09:25:14.467189] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:12:21.120 [2024-05-16 09:25:14.467262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135920 ] 00:12:21.120 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.120 [2024-05-16 09:25:14.545756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:21.120 [2024-05-16 09:25:14.608493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.121 [2024-05-16 09:25:14.608494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.692 09:25:15 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:21.692 09:25:15 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:12:21.692 09:25:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=135971 00:12:21.953 09:25:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:21.953 09:25:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:21.953 [ 00:12:21.953 "bdev_malloc_delete", 00:12:21.953 "bdev_malloc_create", 00:12:21.953 "bdev_null_resize", 00:12:21.953 "bdev_null_delete", 00:12:21.953 "bdev_null_create", 00:12:21.953 "bdev_nvme_cuse_unregister", 00:12:21.953 "bdev_nvme_cuse_register", 00:12:21.953 "bdev_opal_new_user", 00:12:21.953 "bdev_opal_set_lock_state", 00:12:21.953 "bdev_opal_delete", 00:12:21.953 "bdev_opal_get_info", 00:12:21.953 "bdev_opal_create", 00:12:21.953 "bdev_nvme_opal_revert", 00:12:21.953 "bdev_nvme_opal_init", 00:12:21.953 "bdev_nvme_send_cmd", 00:12:21.953 "bdev_nvme_get_path_iostat", 00:12:21.953 "bdev_nvme_get_mdns_discovery_info", 00:12:21.953 "bdev_nvme_stop_mdns_discovery", 00:12:21.954 "bdev_nvme_start_mdns_discovery", 00:12:21.954 "bdev_nvme_set_multipath_policy", 00:12:21.954 "bdev_nvme_set_preferred_path", 00:12:21.954 "bdev_nvme_get_io_paths", 00:12:21.954 "bdev_nvme_remove_error_injection", 00:12:21.954 "bdev_nvme_add_error_injection", 00:12:21.954 "bdev_nvme_get_discovery_info", 00:12:21.954 "bdev_nvme_stop_discovery", 00:12:21.954 "bdev_nvme_start_discovery", 00:12:21.954 "bdev_nvme_get_controller_health_info", 00:12:21.954 "bdev_nvme_disable_controller", 00:12:21.954 "bdev_nvme_enable_controller", 00:12:21.954 "bdev_nvme_reset_controller", 00:12:21.954 "bdev_nvme_get_transport_statistics", 00:12:21.954 "bdev_nvme_apply_firmware", 00:12:21.954 "bdev_nvme_detach_controller", 00:12:21.954 "bdev_nvme_get_controllers", 00:12:21.954 "bdev_nvme_attach_controller", 00:12:21.954 "bdev_nvme_set_hotplug", 00:12:21.954 "bdev_nvme_set_options", 00:12:21.954 "bdev_passthru_delete", 00:12:21.954 "bdev_passthru_create", 00:12:21.954 "bdev_lvol_set_parent_bdev", 00:12:21.954 "bdev_lvol_set_parent", 00:12:21.954 "bdev_lvol_check_shallow_copy", 00:12:21.954 "bdev_lvol_start_shallow_copy", 00:12:21.954 "bdev_lvol_grow_lvstore", 00:12:21.954 "bdev_lvol_get_lvols", 00:12:21.954 "bdev_lvol_get_lvstores", 00:12:21.954 "bdev_lvol_delete", 00:12:21.954 "bdev_lvol_set_read_only", 00:12:21.954 "bdev_lvol_resize", 00:12:21.954 "bdev_lvol_decouple_parent", 00:12:21.954 "bdev_lvol_inflate", 00:12:21.954 "bdev_lvol_rename", 00:12:21.954 "bdev_lvol_clone_bdev", 00:12:21.954 "bdev_lvol_clone", 00:12:21.954 "bdev_lvol_snapshot", 00:12:21.954 "bdev_lvol_create", 00:12:21.954 "bdev_lvol_delete_lvstore", 00:12:21.954 "bdev_lvol_rename_lvstore", 
00:12:21.954 "bdev_lvol_create_lvstore", 00:12:21.954 "bdev_raid_set_options", 00:12:21.954 "bdev_raid_remove_base_bdev", 00:12:21.954 "bdev_raid_add_base_bdev", 00:12:21.954 "bdev_raid_delete", 00:12:21.954 "bdev_raid_create", 00:12:21.954 "bdev_raid_get_bdevs", 00:12:21.954 "bdev_error_inject_error", 00:12:21.954 "bdev_error_delete", 00:12:21.954 "bdev_error_create", 00:12:21.954 "bdev_split_delete", 00:12:21.954 "bdev_split_create", 00:12:21.954 "bdev_delay_delete", 00:12:21.954 "bdev_delay_create", 00:12:21.954 "bdev_delay_update_latency", 00:12:21.954 "bdev_zone_block_delete", 00:12:21.954 "bdev_zone_block_create", 00:12:21.954 "blobfs_create", 00:12:21.954 "blobfs_detect", 00:12:21.954 "blobfs_set_cache_size", 00:12:21.954 "bdev_aio_delete", 00:12:21.954 "bdev_aio_rescan", 00:12:21.954 "bdev_aio_create", 00:12:21.954 "bdev_ftl_set_property", 00:12:21.954 "bdev_ftl_get_properties", 00:12:21.954 "bdev_ftl_get_stats", 00:12:21.954 "bdev_ftl_unmap", 00:12:21.954 "bdev_ftl_unload", 00:12:21.954 "bdev_ftl_delete", 00:12:21.954 "bdev_ftl_load", 00:12:21.954 "bdev_ftl_create", 00:12:21.954 "bdev_virtio_attach_controller", 00:12:21.954 "bdev_virtio_scsi_get_devices", 00:12:21.954 "bdev_virtio_detach_controller", 00:12:21.954 "bdev_virtio_blk_set_hotplug", 00:12:21.954 "bdev_iscsi_delete", 00:12:21.954 "bdev_iscsi_create", 00:12:21.954 "bdev_iscsi_set_options", 00:12:21.954 "accel_error_inject_error", 00:12:21.954 "ioat_scan_accel_module", 00:12:21.954 "dsa_scan_accel_module", 00:12:21.954 "iaa_scan_accel_module", 00:12:21.954 "vfu_virtio_create_scsi_endpoint", 00:12:21.954 "vfu_virtio_scsi_remove_target", 00:12:21.954 "vfu_virtio_scsi_add_target", 00:12:21.954 "vfu_virtio_create_blk_endpoint", 00:12:21.954 "vfu_virtio_delete_endpoint", 00:12:21.954 "keyring_file_remove_key", 00:12:21.954 "keyring_file_add_key", 00:12:21.954 "iscsi_get_histogram", 00:12:21.954 "iscsi_enable_histogram", 00:12:21.954 "iscsi_set_options", 00:12:21.954 "iscsi_get_auth_groups", 00:12:21.954 "iscsi_auth_group_remove_secret", 00:12:21.954 "iscsi_auth_group_add_secret", 00:12:21.954 "iscsi_delete_auth_group", 00:12:21.954 "iscsi_create_auth_group", 00:12:21.954 "iscsi_set_discovery_auth", 00:12:21.954 "iscsi_get_options", 00:12:21.954 "iscsi_target_node_request_logout", 00:12:21.954 "iscsi_target_node_set_redirect", 00:12:21.954 "iscsi_target_node_set_auth", 00:12:21.954 "iscsi_target_node_add_lun", 00:12:21.954 "iscsi_get_stats", 00:12:21.954 "iscsi_get_connections", 00:12:21.954 "iscsi_portal_group_set_auth", 00:12:21.954 "iscsi_start_portal_group", 00:12:21.954 "iscsi_delete_portal_group", 00:12:21.954 "iscsi_create_portal_group", 00:12:21.954 "iscsi_get_portal_groups", 00:12:21.954 "iscsi_delete_target_node", 00:12:21.954 "iscsi_target_node_remove_pg_ig_maps", 00:12:21.954 "iscsi_target_node_add_pg_ig_maps", 00:12:21.954 "iscsi_create_target_node", 00:12:21.954 "iscsi_get_target_nodes", 00:12:21.954 "iscsi_delete_initiator_group", 00:12:21.954 "iscsi_initiator_group_remove_initiators", 00:12:21.954 "iscsi_initiator_group_add_initiators", 00:12:21.954 "iscsi_create_initiator_group", 00:12:21.954 "iscsi_get_initiator_groups", 00:12:21.954 "nvmf_set_crdt", 00:12:21.954 "nvmf_set_config", 00:12:21.954 "nvmf_set_max_subsystems", 00:12:21.954 "nvmf_stop_mdns_prr", 00:12:21.954 "nvmf_publish_mdns_prr", 00:12:21.954 "nvmf_subsystem_get_listeners", 00:12:21.954 "nvmf_subsystem_get_qpairs", 00:12:21.954 "nvmf_subsystem_get_controllers", 00:12:21.954 "nvmf_get_stats", 00:12:21.954 "nvmf_get_transports", 00:12:21.954 
"nvmf_create_transport", 00:12:21.954 "nvmf_get_targets", 00:12:21.954 "nvmf_delete_target", 00:12:21.954 "nvmf_create_target", 00:12:21.954 "nvmf_subsystem_allow_any_host", 00:12:21.954 "nvmf_subsystem_remove_host", 00:12:21.954 "nvmf_subsystem_add_host", 00:12:21.954 "nvmf_ns_remove_host", 00:12:21.954 "nvmf_ns_add_host", 00:12:21.954 "nvmf_subsystem_remove_ns", 00:12:21.954 "nvmf_subsystem_add_ns", 00:12:21.954 "nvmf_subsystem_listener_set_ana_state", 00:12:21.954 "nvmf_discovery_get_referrals", 00:12:21.954 "nvmf_discovery_remove_referral", 00:12:21.954 "nvmf_discovery_add_referral", 00:12:21.954 "nvmf_subsystem_remove_listener", 00:12:21.954 "nvmf_subsystem_add_listener", 00:12:21.954 "nvmf_delete_subsystem", 00:12:21.954 "nvmf_create_subsystem", 00:12:21.954 "nvmf_get_subsystems", 00:12:21.954 "env_dpdk_get_mem_stats", 00:12:21.954 "nbd_get_disks", 00:12:21.954 "nbd_stop_disk", 00:12:21.954 "nbd_start_disk", 00:12:21.954 "ublk_recover_disk", 00:12:21.954 "ublk_get_disks", 00:12:21.954 "ublk_stop_disk", 00:12:21.954 "ublk_start_disk", 00:12:21.954 "ublk_destroy_target", 00:12:21.954 "ublk_create_target", 00:12:21.954 "virtio_blk_create_transport", 00:12:21.954 "virtio_blk_get_transports", 00:12:21.954 "vhost_controller_set_coalescing", 00:12:21.954 "vhost_get_controllers", 00:12:21.954 "vhost_delete_controller", 00:12:21.954 "vhost_create_blk_controller", 00:12:21.954 "vhost_scsi_controller_remove_target", 00:12:21.954 "vhost_scsi_controller_add_target", 00:12:21.954 "vhost_start_scsi_controller", 00:12:21.954 "vhost_create_scsi_controller", 00:12:21.954 "thread_set_cpumask", 00:12:21.954 "framework_get_scheduler", 00:12:21.954 "framework_set_scheduler", 00:12:21.954 "framework_get_reactors", 00:12:21.954 "thread_get_io_channels", 00:12:21.954 "thread_get_pollers", 00:12:21.954 "thread_get_stats", 00:12:21.954 "framework_monitor_context_switch", 00:12:21.954 "spdk_kill_instance", 00:12:21.954 "log_enable_timestamps", 00:12:21.954 "log_get_flags", 00:12:21.954 "log_clear_flag", 00:12:21.954 "log_set_flag", 00:12:21.954 "log_get_level", 00:12:21.954 "log_set_level", 00:12:21.954 "log_get_print_level", 00:12:21.954 "log_set_print_level", 00:12:21.954 "framework_enable_cpumask_locks", 00:12:21.954 "framework_disable_cpumask_locks", 00:12:21.954 "framework_wait_init", 00:12:21.954 "framework_start_init", 00:12:21.954 "scsi_get_devices", 00:12:21.954 "bdev_get_histogram", 00:12:21.954 "bdev_enable_histogram", 00:12:21.954 "bdev_set_qos_limit", 00:12:21.954 "bdev_set_qd_sampling_period", 00:12:21.954 "bdev_get_bdevs", 00:12:21.954 "bdev_reset_iostat", 00:12:21.954 "bdev_get_iostat", 00:12:21.954 "bdev_examine", 00:12:21.954 "bdev_wait_for_examine", 00:12:21.954 "bdev_set_options", 00:12:21.954 "notify_get_notifications", 00:12:21.954 "notify_get_types", 00:12:21.954 "accel_get_stats", 00:12:21.954 "accel_set_options", 00:12:21.954 "accel_set_driver", 00:12:21.954 "accel_crypto_key_destroy", 00:12:21.954 "accel_crypto_keys_get", 00:12:21.954 "accel_crypto_key_create", 00:12:21.954 "accel_assign_opc", 00:12:21.954 "accel_get_module_info", 00:12:21.954 "accel_get_opc_assignments", 00:12:21.954 "vmd_rescan", 00:12:21.954 "vmd_remove_device", 00:12:21.954 "vmd_enable", 00:12:21.954 "sock_get_default_impl", 00:12:21.954 "sock_set_default_impl", 00:12:21.954 "sock_impl_set_options", 00:12:21.954 "sock_impl_get_options", 00:12:21.954 "iobuf_get_stats", 00:12:21.954 "iobuf_set_options", 00:12:21.954 "keyring_get_keys", 00:12:21.954 "framework_get_pci_devices", 00:12:21.954 "framework_get_config", 
00:12:21.954 "framework_get_subsystems", 00:12:21.954 "vfu_tgt_set_base_path", 00:12:21.954 "trace_get_info", 00:12:21.954 "trace_get_tpoint_group_mask", 00:12:21.954 "trace_disable_tpoint_group", 00:12:21.954 "trace_enable_tpoint_group", 00:12:21.954 "trace_clear_tpoint_mask", 00:12:21.954 "trace_set_tpoint_mask", 00:12:21.954 "spdk_get_version", 00:12:21.954 "rpc_get_methods" 00:12:21.954 ] 00:12:21.954 09:25:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.954 09:25:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:21.954 09:25:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 135920 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 135920 ']' 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 135920 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.954 09:25:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135920 00:12:21.955 09:25:15 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:21.955 09:25:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:21.955 09:25:15 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135920' 00:12:21.955 killing process with pid 135920 00:12:21.955 09:25:15 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 135920 00:12:21.955 09:25:15 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 135920 00:12:22.215 00:12:22.215 real 0m1.384s 00:12:22.215 user 0m2.546s 00:12:22.215 sys 0m0.429s 00:12:22.215 09:25:15 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.215 09:25:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.215 ************************************ 00:12:22.215 END TEST spdkcli_tcp 00:12:22.215 ************************************ 00:12:22.215 09:25:15 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:22.215 09:25:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:22.215 09:25:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.215 09:25:15 -- common/autotest_common.sh@10 -- # set +x 00:12:22.215 ************************************ 00:12:22.215 START TEST dpdk_mem_utility 00:12:22.215 ************************************ 00:12:22.215 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:22.477 * Looking for test storage... 
00:12:22.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:12:22.477 09:25:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:12:22.477 09:25:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=136208 00:12:22.477 09:25:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 136208 00:12:22.477 09:25:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:12:22.477 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 136208 ']' 00:12:22.477 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.477 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:22.477 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.477 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:22.477 09:25:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:22.477 [2024-05-16 09:25:15.927757] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:22.477 [2024-05-16 09:25:15.927832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136208 ] 00:12:22.477 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.477 [2024-05-16 09:25:16.006359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.737 [2024-05-16 09:25:16.077663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.311 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:23.311 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:12:23.311 09:25:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:23.311 09:25:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:23.311 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.311 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:23.311 { 00:12:23.311 "filename": "/tmp/spdk_mem_dump.txt" 00:12:23.311 } 00:12:23.311 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.311 09:25:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:12:23.311 DPDK memory size 814.000000 MiB in 1 heap(s) 00:12:23.311 1 heaps totaling size 814.000000 MiB 00:12:23.311 size: 814.000000 MiB heap id: 0 00:12:23.311 end heaps---------- 00:12:23.311 8 mempools totaling size 598.116089 MiB 00:12:23.311 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:23.311 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:23.311 size: 84.521057 MiB name: bdev_io_136208 00:12:23.311 size: 51.011292 MiB name: evtpool_136208 00:12:23.311 size: 50.003479 MiB name: 
msgpool_136208 00:12:23.311 size: 21.763794 MiB name: PDU_Pool 00:12:23.311 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:23.311 size: 0.026123 MiB name: Session_Pool 00:12:23.311 end mempools------- 00:12:23.311 6 memzones totaling size 4.142822 MiB 00:12:23.311 size: 1.000366 MiB name: RG_ring_0_136208 00:12:23.311 size: 1.000366 MiB name: RG_ring_1_136208 00:12:23.311 size: 1.000366 MiB name: RG_ring_4_136208 00:12:23.311 size: 1.000366 MiB name: RG_ring_5_136208 00:12:23.311 size: 0.125366 MiB name: RG_ring_2_136208 00:12:23.311 size: 0.015991 MiB name: RG_ring_3_136208 00:12:23.311 end memzones------- 00:12:23.311 09:25:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:12:23.311 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:12:23.311 list of free elements. size: 12.519348 MiB 00:12:23.311 element at address: 0x200000400000 with size: 1.999512 MiB 00:12:23.311 element at address: 0x200018e00000 with size: 0.999878 MiB 00:12:23.311 element at address: 0x200019000000 with size: 0.999878 MiB 00:12:23.311 element at address: 0x200003e00000 with size: 0.996277 MiB 00:12:23.311 element at address: 0x200031c00000 with size: 0.994446 MiB 00:12:23.311 element at address: 0x200013800000 with size: 0.978699 MiB 00:12:23.311 element at address: 0x200007000000 with size: 0.959839 MiB 00:12:23.311 element at address: 0x200019200000 with size: 0.936584 MiB 00:12:23.311 element at address: 0x200000200000 with size: 0.841614 MiB 00:12:23.311 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:12:23.311 element at address: 0x20000b200000 with size: 0.490723 MiB 00:12:23.311 element at address: 0x200000800000 with size: 0.487793 MiB 00:12:23.311 element at address: 0x200019400000 with size: 0.485657 MiB 00:12:23.311 element at address: 0x200027e00000 with size: 0.410034 MiB 00:12:23.311 element at address: 0x200003a00000 with size: 0.355530 MiB 00:12:23.311 list of standard malloc elements. 
size: 199.218079 MiB 00:12:23.311 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:12:23.311 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:12:23.311 element at address: 0x200018efff80 with size: 1.000122 MiB 00:12:23.311 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:12:23.311 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:12:23.311 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:12:23.311 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:12:23.311 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:12:23.311 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:12:23.311 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003adb300 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003adb500 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003affa80 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003affb40 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:12:23.311 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:12:23.311 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200027e69040 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:12:23.311 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:12:23.311 list of memzone associated elements. 
size: 602.262573 MiB 00:12:23.311 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:12:23.311 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:23.311 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:12:23.311 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:23.311 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:12:23.311 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_136208_0 00:12:23.311 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:12:23.311 associated memzone info: size: 48.002930 MiB name: MP_evtpool_136208_0 00:12:23.311 element at address: 0x200003fff380 with size: 48.003052 MiB 00:12:23.311 associated memzone info: size: 48.002930 MiB name: MP_msgpool_136208_0 00:12:23.311 element at address: 0x2000195be940 with size: 20.255554 MiB 00:12:23.311 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:23.311 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:12:23.311 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:23.311 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:12:23.311 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_136208 00:12:23.311 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:12:23.311 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_136208 00:12:23.311 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:12:23.311 associated memzone info: size: 1.007996 MiB name: MP_evtpool_136208 00:12:23.311 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:12:23.311 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:23.311 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:12:23.311 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:23.311 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:12:23.311 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:23.311 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:12:23.311 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:23.311 element at address: 0x200003eff180 with size: 1.000488 MiB 00:12:23.311 associated memzone info: size: 1.000366 MiB name: RG_ring_0_136208 00:12:23.311 element at address: 0x200003affc00 with size: 1.000488 MiB 00:12:23.311 associated memzone info: size: 1.000366 MiB name: RG_ring_1_136208 00:12:23.311 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:12:23.311 associated memzone info: size: 1.000366 MiB name: RG_ring_4_136208 00:12:23.311 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:12:23.311 associated memzone info: size: 1.000366 MiB name: RG_ring_5_136208 00:12:23.311 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:12:23.311 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_136208 00:12:23.311 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:12:23.311 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:23.311 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:12:23.311 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:23.311 element at address: 0x20001947c540 with size: 0.250488 MiB 00:12:23.311 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:23.311 element at address: 0x200003adf880 with size: 0.125488 MiB 00:12:23.311 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_136208 00:12:23.311 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:12:23.311 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:23.311 element at address: 0x200027e69100 with size: 0.023743 MiB 00:12:23.311 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:23.311 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:12:23.311 associated memzone info: size: 0.015991 MiB name: RG_ring_3_136208 00:12:23.311 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:12:23.311 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:23.311 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:12:23.311 associated memzone info: size: 0.000183 MiB name: MP_msgpool_136208 00:12:23.311 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:12:23.311 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_136208 00:12:23.311 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:12:23.312 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:23.312 09:25:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:23.312 09:25:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 136208 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 136208 ']' 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 136208 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 136208 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 136208' 00:12:23.312 killing process with pid 136208 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 136208 00:12:23.312 09:25:16 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 136208 00:12:23.573 00:12:23.573 real 0m1.282s 00:12:23.573 user 0m1.365s 00:12:23.573 sys 0m0.384s 00:12:23.573 09:25:17 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.573 09:25:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:23.573 ************************************ 00:12:23.573 END TEST dpdk_mem_utility 00:12:23.573 ************************************ 00:12:23.573 09:25:17 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:12:23.573 09:25:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:23.573 09:25:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.573 09:25:17 -- common/autotest_common.sh@10 -- # set +x 00:12:23.573 ************************************ 00:12:23.573 START TEST event 00:12:23.573 ************************************ 00:12:23.573 09:25:17 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:12:23.834 * Looking for test storage... 
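The dpdk_mem_utility section above exercises two pieces: the env_dpdk_get_mem_stats RPC, which makes the target write a DPDK memory dump (its reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which parses that dump. Run with no arguments it prints the heap, mempool and memzone totals, and the -m 0 invocation in the trace prints the per-element breakdown for heap id 0. A standalone sketch, assuming a target is already running on the default socket:

    # Sketch only: dump and summarize the target's DPDK memory state.
    ./scripts/rpc.py env_dpdk_get_mem_stats      # reply: {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/dpdk_mem_info.py                   # heap, mempool and memzone summary
    ./scripts/dpdk_mem_info.py -m 0              # detailed element list, as in the trace above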
00:12:23.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:12:23.834 09:25:17 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:12:23.834 09:25:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:23.834 09:25:17 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:23.834 09:25:17 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:12:23.834 09:25:17 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.834 09:25:17 event -- common/autotest_common.sh@10 -- # set +x 00:12:23.834 ************************************ 00:12:23.834 START TEST event_perf 00:12:23.834 ************************************ 00:12:23.834 09:25:17 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:23.834 Running I/O for 1 seconds...[2024-05-16 09:25:17.302863] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:23.834 [2024-05-16 09:25:17.302984] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136457 ] 00:12:23.834 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.095 [2024-05-16 09:25:17.394795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.095 [2024-05-16 09:25:17.471905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.095 [2024-05-16 09:25:17.471935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.095 [2024-05-16 09:25:17.472076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.095 Running I/O for 1 seconds...[2024-05-16 09:25:17.472078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.038 00:12:25.038 lcore 0: 167833 00:12:25.038 lcore 1: 167835 00:12:25.038 lcore 2: 167837 00:12:25.038 lcore 3: 167836 00:12:25.038 done. 00:12:25.038 00:12:25.038 real 0m1.235s 00:12:25.038 user 0m4.126s 00:12:25.038 sys 0m0.104s 00:12:25.038 09:25:18 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:25.038 09:25:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:25.038 ************************************ 00:12:25.038 END TEST event_perf 00:12:25.038 ************************************ 00:12:25.038 09:25:18 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:12:25.038 09:25:18 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:25.038 09:25:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:25.038 09:25:18 event -- common/autotest_common.sh@10 -- # set +x 00:12:25.038 ************************************ 00:12:25.038 START TEST event_reactor 00:12:25.038 ************************************ 00:12:25.038 09:25:18 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:12:25.299 [2024-05-16 09:25:18.613628] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:12:25.299 [2024-05-16 09:25:18.613731] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136791 ] 00:12:25.299 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.299 [2024-05-16 09:25:18.690874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.299 [2024-05-16 09:25:18.750483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.257 test_start 00:12:26.257 oneshot 00:12:26.257 tick 100 00:12:26.257 tick 100 00:12:26.257 tick 250 00:12:26.257 tick 100 00:12:26.257 tick 100 00:12:26.257 tick 100 00:12:26.257 tick 250 00:12:26.257 tick 500 00:12:26.257 tick 100 00:12:26.257 tick 100 00:12:26.257 tick 250 00:12:26.257 tick 100 00:12:26.257 tick 100 00:12:26.257 test_end 00:12:26.257 00:12:26.258 real 0m1.200s 00:12:26.258 user 0m1.117s 00:12:26.258 sys 0m0.078s 00:12:26.258 09:25:19 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:26.258 09:25:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:26.258 ************************************ 00:12:26.258 END TEST event_reactor 00:12:26.258 ************************************ 00:12:26.518 09:25:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:26.518 09:25:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:26.518 09:25:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:26.518 09:25:19 event -- common/autotest_common.sh@10 -- # set +x 00:12:26.518 ************************************ 00:12:26.518 START TEST event_reactor_perf 00:12:26.518 ************************************ 00:12:26.518 09:25:19 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:26.518 [2024-05-16 09:25:19.890600] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:12:26.518 [2024-05-16 09:25:19.890698] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137141 ] 00:12:26.518 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.518 [2024-05-16 09:25:19.970459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.518 [2024-05-16 09:25:20.028337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.906 test_start 00:12:27.906 test_end 00:12:27.906 Performance: 530341 events per second 00:12:27.906 00:12:27.906 real 0m1.202s 00:12:27.906 user 0m1.124s 00:12:27.906 sys 0m0.074s 00:12:27.906 09:25:21 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.906 09:25:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:27.906 ************************************ 00:12:27.906 END TEST event_reactor_perf 00:12:27.906 ************************************ 00:12:27.906 09:25:21 event -- event/event.sh@49 -- # uname -s 00:12:27.907 09:25:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:27.907 09:25:21 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:12:27.907 09:25:21 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:27.907 09:25:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.907 09:25:21 event -- common/autotest_common.sh@10 -- # set +x 00:12:27.907 ************************************ 00:12:27.907 START TEST event_scheduler 00:12:27.907 ************************************ 00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:12:27.907 * Looking for test storage... 00:12:27.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:12:27.907 09:25:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:27.907 09:25:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=137522 00:12:27.907 09:25:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:27.907 09:25:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:27.907 09:25:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 137522 00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 137522 ']' 00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:27.907 09:25:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:27.907 [2024-05-16 09:25:21.305080] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:27.907 [2024-05-16 09:25:21.305148] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137522 ] 00:12:27.907 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.907 [2024-05-16 09:25:21.385524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.168 [2024-05-16 09:25:21.480484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.168 [2024-05-16 09:25:21.480645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.168 [2024-05-16 09:25:21.480805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.168 [2024-05-16 09:25:21.480806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:12:28.741 09:25:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 POWER: Env isn't set yet! 00:12:28.741 POWER: Attempting to initialise ACPI cpufreq power management... 00:12:28.741 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:28.741 POWER: Cannot set governor of lcore 0 to userspace 00:12:28.741 POWER: Attempting to initialise PSTAT power management... 00:12:28.741 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:12:28.741 POWER: Initialized successfully for lcore 0 power management 00:12:28.741 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:12:28.741 POWER: Initialized successfully for lcore 1 power management 00:12:28.741 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:12:28.741 POWER: Initialized successfully for lcore 2 power management 00:12:28.741 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:12:28.741 POWER: Initialized successfully for lcore 3 power management 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.741 09:25:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 [2024-05-16 09:25:22.192866] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
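The POWER lines above come from DPDK's power management, which comes up here together with the dynamic scheduler during framework_start_init: on this host the acpi-cpufreq userspace governor cannot be set, so the PSTAT path (intel_pstate on Intel hosts) is used and every lcore's governor is switched to 'performance'; the matching teardown near the end of the event_scheduler section switches them back to 'powersave'. To see what is being manipulated, the standard cpufreq sysfs files tell the story (these paths are the stock kernel interface, not anything SPDK-specific):

    # Sketch only: inspect the cpufreq state the power library is toggling.
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver                # e.g. intel_pstate
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor              # 'performance' while the app runs
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors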
00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.741 09:25:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 ************************************ 00:12:28.741 START TEST scheduler_create_thread 00:12:28.741 ************************************ 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 2 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 3 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 4 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:28.741 5 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.741 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.003 6 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.003 7 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.003 8 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.003 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:29.265 9 00:12:29.265 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.265 09:25:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:29.266 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.266 09:25:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:30.651 10 00:12:30.651 09:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.651 09:25:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:30.651 09:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.651 09:25:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:31.595 09:25:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.595 09:25:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:31.595 09:25:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:31.595 09:25:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.595 09:25:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:32.167 09:25:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.167 09:25:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:32.167 09:25:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.167 09:25:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:32.740 09:25:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.740 09:25:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:32.740 09:25:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:32.740 09:25:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.740 09:25:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:33.312 09:25:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.312 00:12:33.312 real 0m4.464s 00:12:33.312 user 0m0.025s 00:12:33.312 sys 0m0.006s 00:12:33.312 09:25:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:33.312 09:25:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:33.312 ************************************ 00:12:33.312 END TEST scheduler_create_thread 00:12:33.312 ************************************ 00:12:33.312 09:25:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:33.312 09:25:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 137522 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 137522 ']' 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 137522 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137522 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137522' 00:12:33.312 killing process with pid 137522 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 137522 00:12:33.312 09:25:26 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 137522 00:12:33.574 [2024-05-16 09:25:26.979655] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
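
The scheduler_create_thread test above drives everything through three RPCs loaded from scheduler_plugin: scheduler_thread_create (thread name, optional -m cpumask, -a active percentage), scheduler_thread_set_active and scheduler_thread_delete, capturing the thread id that scheduler_thread_create prints. A minimal stand-alone sketch of the same flow, assuming the scheduler test application is already running, that rpc.py can reach its RPC socket, and that the scheduler_plugin module is importable (the harness arranges all of this outside the lines shown here):

    # Sketch only - the running scheduler app and plugin import path are assumptions.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Create one busy thread pinned to core 0 and one idle thread pinned to core 1;
    # scheduler_thread_create prints the new thread id on stdout.
    busy_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    idle_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0)

    # Drop the busy thread to 50% load, then delete the idle one.
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$busy_id" 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$idle_id"
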
00:12:33.574 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:12:33.574 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:12:33.574 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:12:33.574 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:12:33.574 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:12:33.574 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:12:33.574 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:12:33.574 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:12:33.835 00:12:33.835 real 0m5.985s 00:12:33.835 user 0m14.171s 00:12:33.835 sys 0m0.381s 00:12:33.835 09:25:27 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:33.835 09:25:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:33.835 ************************************ 00:12:33.835 END TEST event_scheduler 00:12:33.835 ************************************ 00:12:33.835 09:25:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:33.835 09:25:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:33.835 09:25:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:33.835 09:25:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:33.835 09:25:27 event -- common/autotest_common.sh@10 -- # set +x 00:12:33.835 ************************************ 00:12:33.835 START TEST app_repeat 00:12:33.835 ************************************ 00:12:33.835 09:25:27 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=138590 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 138590' 00:12:33.835 Process app_repeat pid: 138590 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:33.835 spdk_app_start Round 0 00:12:33.835 09:25:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 138590 /var/tmp/spdk-nbd.sock 00:12:33.835 09:25:27 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 138590 ']' 00:12:33.835 09:25:27 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:33.835 09:25:27 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:33.835 09:25:27 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:33.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:33.835 09:25:27 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:33.835 09:25:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:33.835 [2024-05-16 09:25:27.263497] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:33.835 [2024-05-16 09:25:27.263601] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138590 ] 00:12:33.835 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.835 [2024-05-16 09:25:27.328749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:34.096 [2024-05-16 09:25:27.398478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.096 [2024-05-16 09:25:27.398481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.669 09:25:28 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:34.669 09:25:28 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:34.669 09:25:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:34.669 Malloc0 00:12:34.669 09:25:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:34.930 Malloc1 00:12:34.930 09:25:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:34.930 09:25:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:35.192 /dev/nbd0 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:35.192 1+0 records in 00:12:35.192 1+0 records out 00:12:35.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273196 s, 15.0 MB/s 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:35.192 /dev/nbd1 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:35.192 09:25:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:35.192 1+0 records in 00:12:35.192 1+0 records out 00:12:35.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000277055 s, 14.8 MB/s 00:12:35.192 09:25:28 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:35.453 09:25:28 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:35.453 09:25:28 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:35.453 09:25:28 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:35.453 09:25:28 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:35.453 { 00:12:35.453 "nbd_device": "/dev/nbd0", 00:12:35.453 "bdev_name": "Malloc0" 00:12:35.453 }, 00:12:35.453 { 00:12:35.453 "nbd_device": "/dev/nbd1", 00:12:35.453 "bdev_name": "Malloc1" 00:12:35.453 } 00:12:35.453 ]' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:35.453 { 00:12:35.453 "nbd_device": "/dev/nbd0", 00:12:35.453 "bdev_name": "Malloc0" 00:12:35.453 }, 00:12:35.453 { 00:12:35.453 "nbd_device": "/dev/nbd1", 00:12:35.453 "bdev_name": "Malloc1" 00:12:35.453 } 00:12:35.453 ]' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:35.453 /dev/nbd1' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:35.453 /dev/nbd1' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:35.453 256+0 records in 00:12:35.453 256+0 records out 00:12:35.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117417 s, 89.3 MB/s 00:12:35.453 09:25:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:12:35.454 09:25:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:35.454 256+0 records in 00:12:35.454 256+0 records out 00:12:35.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173125 s, 60.6 MB/s 00:12:35.454 09:25:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:35.454 09:25:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:35.717 256+0 records in 00:12:35.717 256+0 records out 00:12:35.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176938 s, 59.3 MB/s 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.717 09:25:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.979 09:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.241 09:25:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.241 09:25:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:36.241 09:25:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:36.502 [2024-05-16 09:25:29.907199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:36.502 [2024-05-16 09:25:29.970810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.502 [2024-05-16 09:25:29.970813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.502 [2024-05-16 09:25:30.003014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:36.502 [2024-05-16 09:25:30.003050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
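
Each app_repeat round traced here follows the same NBD round trip: create two malloc bdevs (rpc.py bdev_malloc_create 64 4096, i.e. 64 MB with a 4 KiB block size), export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data to each and compare it back, then detach the devices and kill the app before the next round. A condensed sketch of one round, assuming the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (modprobe nbd, as at the start of the test); the temporary file name is illustrative:

    # One app_repeat-style round; everything not shown in this log
    # (socket availability, loaded nbd module, temp file name) is assumed.
    rpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-nbd.sock "$@"
    }

    rpc bdev_malloc_create 64 4096       # prints "Malloc0"
    rpc bdev_malloc_create 64 4096       # prints "Malloc1"
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1

    randfile=/tmp/nbdrandtest            # illustrative path
    dd if=/dev/urandom of="$randfile" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$randfile" "$nbd"  # verify the data round-tripped
    done
    rm "$randfile"

    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    rpc spdk_kill_instance SIGTERM       # tear the app down between rounds
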
00:12:39.818 09:25:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:39.818 09:25:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:39.818 spdk_app_start Round 1 00:12:39.818 09:25:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 138590 /var/tmp/spdk-nbd.sock 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 138590 ']' 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:39.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.818 09:25:32 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:39.818 09:25:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:39.818 Malloc0 00:12:39.818 09:25:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:39.818 Malloc1 00:12:39.818 09:25:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:39.818 09:25:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:40.079 /dev/nbd0 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:40.079 1+0 records in 00:12:40.079 1+0 records out 00:12:40.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213628 s, 19.2 MB/s 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:40.079 /dev/nbd1 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:40.079 1+0 records in 00:12:40.079 1+0 records out 00:12:40.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225446 s, 18.2 MB/s 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:40.079 09:25:33 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:40.079 09:25:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.079 09:25:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:40.340 { 00:12:40.340 "nbd_device": "/dev/nbd0", 00:12:40.340 "bdev_name": "Malloc0" 00:12:40.340 }, 00:12:40.340 { 00:12:40.340 "nbd_device": "/dev/nbd1", 00:12:40.340 "bdev_name": "Malloc1" 00:12:40.340 } 00:12:40.340 ]' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:40.340 { 00:12:40.340 "nbd_device": "/dev/nbd0", 00:12:40.340 "bdev_name": "Malloc0" 00:12:40.340 }, 00:12:40.340 { 00:12:40.340 "nbd_device": "/dev/nbd1", 00:12:40.340 "bdev_name": "Malloc1" 00:12:40.340 } 00:12:40.340 ]' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:40.340 /dev/nbd1' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:40.340 /dev/nbd1' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:40.340 256+0 records in 00:12:40.340 256+0 records out 00:12:40.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116143 s, 90.3 MB/s 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:40.340 256+0 records in 00:12:40.340 256+0 records out 00:12:40.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0174825 s, 60.0 MB/s 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:40.340 09:25:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:40.340 256+0 records in 00:12:40.340 256+0 records out 00:12:40.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177019 s, 59.2 MB/s 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:40.601 09:25:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.602 09:25:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.602 09:25:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.862 09:25:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:41.122 09:25:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:41.122 09:25:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:41.122 09:25:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:41.382 [2024-05-16 09:25:34.774880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:41.382 [2024-05-16 09:25:34.839104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.382 [2024-05-16 09:25:34.839126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.382 [2024-05-16 09:25:34.871549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:41.382 [2024-05-16 09:25:34.871584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
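
After stopping both devices the harness double-checks that nothing is still exported by asking the app for its NBD table and counting /dev/nbd entries (nbd_get_count in nbd_common.sh). A small hedged equivalent of that check, with the socket path taken from the trace:

    # Expect 0 once nbd_stop_disk has run for both devices; grep -c still
    # prints the count (0) even when it exits non-zero, hence the || true.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    count=$($rpc_py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || echo "warning: $count NBD device(s) still attached"
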
00:12:44.686 09:25:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:44.686 09:25:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:44.686 spdk_app_start Round 2 00:12:44.686 09:25:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 138590 /var/tmp/spdk-nbd.sock 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 138590 ']' 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:44.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:44.686 09:25:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:44.686 09:25:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:44.686 Malloc0 00:12:44.686 09:25:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:44.686 Malloc1 00:12:44.686 09:25:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.686 09:25:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:44.948 /dev/nbd0 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:44.948 1+0 records in 00:12:44.948 1+0 records out 00:12:44.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201464 s, 20.3 MB/s 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:44.948 /dev/nbd1 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:44.948 1+0 records in 00:12:44.948 1+0 records out 00:12:44.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018563 s, 22.1 MB/s 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:44.948 09:25:38 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:44.948 09:25:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.948 09:25:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:45.209 09:25:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:45.209 { 00:12:45.210 "nbd_device": "/dev/nbd0", 00:12:45.210 "bdev_name": "Malloc0" 00:12:45.210 }, 00:12:45.210 { 00:12:45.210 "nbd_device": "/dev/nbd1", 00:12:45.210 "bdev_name": "Malloc1" 00:12:45.210 } 00:12:45.210 ]' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:45.210 { 00:12:45.210 "nbd_device": "/dev/nbd0", 00:12:45.210 "bdev_name": "Malloc0" 00:12:45.210 }, 00:12:45.210 { 00:12:45.210 "nbd_device": "/dev/nbd1", 00:12:45.210 "bdev_name": "Malloc1" 00:12:45.210 } 00:12:45.210 ]' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:45.210 /dev/nbd1' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:45.210 /dev/nbd1' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:45.210 256+0 records in 00:12:45.210 256+0 records out 00:12:45.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118622 s, 88.4 MB/s 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:45.210 256+0 records in 00:12:45.210 256+0 records out 00:12:45.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0167986 s, 62.4 MB/s 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:45.210 256+0 records in 00:12:45.210 256+0 records out 00:12:45.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189895 s, 55.2 MB/s 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.210 09:25:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.472 09:25:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:45.734 09:25:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:45.996 09:25:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:45.996 09:25:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:45.996 09:25:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:46.257 [2024-05-16 09:25:39.637866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:46.257 [2024-05-16 09:25:39.702111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.257 [2024-05-16 09:25:39.702304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.257 [2024-05-16 09:25:39.733914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:46.257 [2024-05-16 09:25:39.733946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
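
The waitfornbd and waitfornbd_exit helpers that appear around every nbd_start_disk/nbd_stop_disk call simply poll /proc/partitions until the kernel has attached or released the device (waitfornbd additionally performs a single 4 KiB O_DIRECT read to confirm I/O works, which is where the 1+0 dd records above come from). A simplified sketch; the iteration bound matches the i <= 20 loop in the trace, while the sleep interval is an assumption:

    waitfornbd() {                       # wait until $1 (e.g. nbd0) appears
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && return 0
            sleep 0.1                    # assumed back-off, not shown in the log
        done
        return 1
    }

    waitfornbd_exit() {                  # wait until $1 disappears again
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }
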
00:12:49.608 09:25:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 138590 /var/tmp/spdk-nbd.sock 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 138590 ']' 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:49.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:49.608 09:25:42 event.app_repeat -- event/event.sh@39 -- # killprocess 138590 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 138590 ']' 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 138590 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138590 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138590' 00:12:49.608 killing process with pid 138590 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@965 -- # kill 138590 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@970 -- # wait 138590 00:12:49.608 spdk_app_start is called in Round 0. 00:12:49.608 Shutdown signal received, stop current app iteration 00:12:49.608 Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 reinitialization... 00:12:49.608 spdk_app_start is called in Round 1. 00:12:49.608 Shutdown signal received, stop current app iteration 00:12:49.608 Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 reinitialization... 00:12:49.608 spdk_app_start is called in Round 2. 00:12:49.608 Shutdown signal received, stop current app iteration 00:12:49.608 Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 reinitialization... 00:12:49.608 spdk_app_start is called in Round 3. 
00:12:49.608 Shutdown signal received, stop current app iteration 00:12:49.608 09:25:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:49.608 09:25:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:49.608 00:12:49.608 real 0m15.601s 00:12:49.608 user 0m33.610s 00:12:49.608 sys 0m2.161s 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.608 09:25:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 ************************************ 00:12:49.608 END TEST app_repeat 00:12:49.608 ************************************ 00:12:49.608 09:25:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:49.608 09:25:42 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:12:49.608 09:25:42 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:49.608 09:25:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.608 09:25:42 event -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 ************************************ 00:12:49.608 START TEST cpu_locks 00:12:49.608 ************************************ 00:12:49.608 09:25:42 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:12:49.608 * Looking for test storage... 00:12:49.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:12:49.608 09:25:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:49.608 09:25:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:49.608 09:25:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:49.608 09:25:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:49.608 09:25:43 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:49.608 09:25:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.608 09:25:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 ************************************ 00:12:49.608 START TEST default_locks 00:12:49.608 ************************************ 00:12:49.608 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:12:49.608 09:25:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=142165 00:12:49.608 09:25:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 142165 00:12:49.608 09:25:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:49.608 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 142165 ']' 00:12:49.609 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.609 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.609 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
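Annotation: the app_repeat teardown above goes through the killprocess helper from autotest_common.sh: it verifies the pid is still alive with kill -0, checks on Linux that the process name reported by ps --no-headers -o comm= is a reactor rather than sudo, prints the 'killing process with pid ...' marker, and then kills and waits on the pid. A simplified sketch under those assumptions (the sudo branch and most error handling are omitted):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # fail early if the process is already gone
        if [ "$(uname)" = Linux ]; then
            ps --no-headers -o comm= "$pid"               # the real helper inspects this name (e.g. reactor_0)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap the backgrounded target
    }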
00:12:49.609 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.609 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.609 [2024-05-16 09:25:43.114273] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:49.609 [2024-05-16 09:25:43.114323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142165 ] 00:12:49.609 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.869 [2024-05-16 09:25:43.176398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.869 [2024-05-16 09:25:43.244158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.440 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:50.440 09:25:43 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:12:50.440 09:25:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 142165 00:12:50.440 09:25:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 142165 00:12:50.440 09:25:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:51.011 lslocks: write error 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 142165 ']' 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142165' 00:12:51.011 killing process with pid 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # 
waitforlisten 142165 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 142165 ']' 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:51.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (142165) - No such process 00:12:51.011 ERROR: process (pid: 142165) is no longer running 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:51.011 09:25:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:51.012 09:25:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:51.012 00:12:51.012 real 0m1.503s 00:12:51.012 user 0m1.610s 00:12:51.012 sys 0m0.485s 00:12:51.012 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.012 09:25:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:51.012 ************************************ 00:12:51.012 END TEST default_locks 00:12:51.012 ************************************ 00:12:51.273 09:25:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:51.273 09:25:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.273 09:25:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.273 09:25:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:51.273 ************************************ 00:12:51.273 START TEST default_locks_via_rpc 00:12:51.273 ************************************ 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=142511 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 142511 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 142511 ']' 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.273 09:25:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.273 [2024-05-16 09:25:44.692644] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:51.273 [2024-05-16 09:25:44.692695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142511 ] 00:12:51.273 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.273 [2024-05-16 09:25:44.754190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.273 [2024-05-16 09:25:44.821938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 142511 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 142511 00:12:52.219 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 142511 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@946 -- # '[' -z 142511 ']' 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 142511 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142511 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142511' 00:12:52.481 killing process with pid 142511 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 142511 00:12:52.481 09:25:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 142511 00:12:52.743 00:12:52.743 real 0m1.487s 00:12:52.743 user 0m1.578s 00:12:52.743 sys 0m0.492s 00:12:52.743 09:25:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:52.743 09:25:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.743 ************************************ 00:12:52.743 END TEST default_locks_via_rpc 00:12:52.743 ************************************ 00:12:52.743 09:25:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:52.743 09:25:46 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:52.743 09:25:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.743 09:25:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:52.743 ************************************ 00:12:52.743 START TEST non_locking_app_on_locked_coremask 00:12:52.743 ************************************ 00:12:52.743 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:12:52.743 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=142818 00:12:52.743 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 142818 /var/tmp/spdk.sock 00:12:52.743 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:52.744 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 142818 ']' 00:12:52.744 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.744 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.744 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
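Annotation: the default_locks and default_locks_via_rpc runs above rely on two small helpers. locks_exist lists the file locks held by the target pid with lslocks and greps for the spdk_cpu_lock prefix (the stray 'lslocks: write error' lines are only lslocks complaining after grep -q closes the pipe early). NOT inverts a command's exit status, which is how the scripts assert that waitforlisten must fail once the target has been killed. Minimal sketches, assuming the helpers behave as the trace suggests:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock           # true if pid $1 holds an SPDK core-lock file
    }

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))                                     # succeed only when the wrapped command failed
    }

    # usage mirroring the trace: after killprocess, the listen must never happen
    #   NOT waitforlisten 142165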
00:12:52.744 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.744 09:25:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:52.744 [2024-05-16 09:25:46.260465] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:52.744 [2024-05-16 09:25:46.260521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142818 ] 00:12:52.744 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.005 [2024-05-16 09:25:46.324357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.005 [2024-05-16 09:25:46.398885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=142913 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 142913 /var/tmp/spdk2.sock 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 142913 ']' 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:53.578 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:53.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:53.579 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:53.579 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:53.579 [2024-05-16 09:25:47.074397] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:53.579 [2024-05-16 09:25:47.074451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142913 ] 00:12:53.579 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.839 [2024-05-16 09:25:47.160245] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:53.839 [2024-05-16 09:25:47.160273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.839 [2024-05-16 09:25:47.294170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.413 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.413 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:54.413 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 142818 00:12:54.413 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 142818 00:12:54.413 09:25:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:54.985 lslocks: write error 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 142818 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 142818 ']' 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 142818 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142818 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142818' 00:12:54.985 killing process with pid 142818 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 142818 00:12:54.985 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 142818 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 142913 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 142913 ']' 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 142913 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142913 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142913' 00:12:55.557 killing 
process with pid 142913 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 142913 00:12:55.557 09:25:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 142913 00:12:55.557 00:12:55.557 real 0m2.909s 00:12:55.557 user 0m3.156s 00:12:55.557 sys 0m0.868s 00:12:55.557 09:25:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.557 09:25:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.557 ************************************ 00:12:55.557 END TEST non_locking_app_on_locked_coremask 00:12:55.557 ************************************ 00:12:55.819 09:25:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:55.819 09:25:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:55.819 09:25:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.819 09:25:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:55.819 ************************************ 00:12:55.819 START TEST locking_app_on_unlocked_coremask 00:12:55.819 ************************************ 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=143346 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 143346 /var/tmp/spdk.sock 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 143346 ']' 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:55.819 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.819 [2024-05-16 09:25:49.238550] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:12:55.819 [2024-05-16 09:25:49.238601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143346 ] 00:12:55.819 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.819 [2024-05-16 09:25:49.295711] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
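Annotation: non_locking_app_on_locked_coremask, which finishes just above, starts a first spdk_tgt on mask 0x1 (claiming the core 0 lock file) and then a second one on the same mask but with --disable-cpumask-locks and its own RPC socket, so both can run on core 0, as the 'CPU core locks deactivated' notice for pid 142913 shows. In outline (spdk_tgt stands for the full build/bin/spdk_tgt path used in the trace):

    spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips lock acquisition, separate socket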
00:12:55.819 [2024-05-16 09:25:49.295739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.819 [2024-05-16 09:25:49.359706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=143620 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 143620 /var/tmp/spdk2.sock 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 143620 ']' 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:56.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.767 09:25:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:56.767 [2024-05-16 09:25:50.044282] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:12:56.767 [2024-05-16 09:25:50.044343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143620 ] 00:12:56.767 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.767 [2024-05-16 09:25:50.133375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.767 [2024-05-16 09:25:50.262668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.338 09:25:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.338 09:25:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:57.338 09:25:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 143620 00:12:57.338 09:25:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 143620 00:12:57.338 09:25:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:57.910 lslocks: write error 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 143346 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 143346 ']' 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 143346 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143346 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143346' 00:12:57.910 killing process with pid 143346 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 143346 00:12:57.910 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 143346 00:12:58.171 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 143620 00:12:58.171 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 143620 ']' 00:12:58.171 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 143620 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143620 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.433 
09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143620' 00:12:58.433 killing process with pid 143620 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 143620 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 143620 00:12:58.433 00:12:58.433 real 0m2.805s 00:12:58.433 user 0m3.057s 00:12:58.433 sys 0m0.826s 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.433 09:25:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 ************************************ 00:12:58.433 END TEST locking_app_on_unlocked_coremask 00:12:58.433 ************************************ 00:12:58.694 09:25:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:58.694 09:25:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:58.694 09:25:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.694 09:25:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:58.694 ************************************ 00:12:58.694 START TEST locking_app_on_locked_coremask 00:12:58.694 ************************************ 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=143996 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 143996 /var/tmp/spdk.sock 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 143996 ']' 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:58.694 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:58.695 [2024-05-16 09:25:52.124073] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
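Annotation: locking_app_on_unlocked_coremask, which ends above, reverses that order: the first target starts with --disable-cpumask-locks and takes no lock file, so the second target, launched without the flag, is the one that claims core 0, which the lslocks check against pid 143620 confirms:

    spdk_tgt -m 0x1 --disable-cpumask-locks &            # first instance: no lock taken
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &             # second instance claims core 0's lock
    # locks_exist <pid of the second instance>           # expected to succeed here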
00:12:58.695 [2024-05-16 09:25:52.124127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143996 ] 00:12:58.695 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.695 [2024-05-16 09:25:52.185783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.955 [2024-05-16 09:25:52.258007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.528 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=144253 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 144253 /var/tmp/spdk2.sock 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 144253 /var/tmp/spdk2.sock 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 144253 /var/tmp/spdk2.sock 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 144253 ']' 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:59.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.529 09:25:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:59.529 [2024-05-16 09:25:52.938877] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:12:59.529 [2024-05-16 09:25:52.938927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144253 ] 00:12:59.529 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.529 [2024-05-16 09:25:53.027060] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 143996 has claimed it. 00:12:59.529 [2024-05-16 09:25:53.027100] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:00.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (144253) - No such process 00:13:00.101 ERROR: process (pid: 144253) is no longer running 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 143996 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 143996 00:13:00.101 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:00.373 lslocks: write error 00:13:00.373 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 143996 00:13:00.373 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 143996 ']' 00:13:00.373 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 143996 00:13:00.373 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:13:00.373 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:00.373 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143996 00:13:00.638 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:00.638 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:00.638 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143996' 00:13:00.638 killing process with pid 143996 00:13:00.638 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 143996 00:13:00.638 09:25:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 143996 00:13:00.638 00:13:00.638 real 0m2.096s 00:13:00.638 user 0m2.349s 00:13:00.638 sys 0m0.545s 00:13:00.638 09:25:54 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.638 09:25:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:00.638 ************************************ 00:13:00.638 END TEST locking_app_on_locked_coremask 00:13:00.638 ************************************ 00:13:00.901 09:25:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:00.901 09:25:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:00.901 09:25:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.901 09:25:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:00.901 ************************************ 00:13:00.901 START TEST locking_overlapped_coremask 00:13:00.901 ************************************ 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=144457 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 144457 /var/tmp/spdk.sock 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 144457 ']' 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:00.901 09:25:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:00.901 [2024-05-16 09:25:54.297509] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
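Annotation: locking_app_on_locked_coremask, which ends above, is the negative case: with the first target already holding the core 0 lock, a second plain spdk_tgt -m 0x1 aborts with 'Cannot create lock on core 0, probably process 143996 has claimed it', and the script turns that expected failure into a pass with the NOT wrapper:

    spdk_tgt -m 0x1 &                                    # first instance holds /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &             # second instance aborts on the claimed core
    pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock        # passes only because the listen never happens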
00:13:00.901 [2024-05-16 09:25:54.297566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144457 ] 00:13:00.901 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.901 [2024-05-16 09:25:54.358059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.901 [2024-05-16 09:25:54.427350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.901 [2024-05-16 09:25:54.427462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.901 [2024-05-16 09:25:54.427465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.844 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:01.844 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:01.844 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=144704 00:13:01.844 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 144704 /var/tmp/spdk2.sock 00:13:01.844 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:13:01.844 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 144704 /var/tmp/spdk2.sock 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 144704 /var/tmp/spdk2.sock 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 144704 ']' 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:01.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:01.845 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:01.845 [2024-05-16 09:25:55.126995] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:13:01.845 [2024-05-16 09:25:55.127050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144704 ] 00:13:01.845 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.845 [2024-05-16 09:25:55.196541] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 144457 has claimed it. 00:13:01.845 [2024-05-16 09:25:55.196573] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:02.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (144704) - No such process 00:13:02.418 ERROR: process (pid: 144704) is no longer running 00:13:02.418 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.418 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:13:02.418 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:13:02.418 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:02.418 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 144457 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 144457 ']' 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 144457 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144457 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144457' 00:13:02.419 killing process with pid 144457 00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 144457 
00:13:02.419 09:25:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 144457 00:13:02.681 00:13:02.681 real 0m1.756s 00:13:02.681 user 0m4.981s 00:13:02.681 sys 0m0.362s 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.681 ************************************ 00:13:02.681 END TEST locking_overlapped_coremask 00:13:02.681 ************************************ 00:13:02.681 09:25:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:02.681 09:25:56 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:02.681 09:25:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.681 09:25:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:02.681 ************************************ 00:13:02.681 START TEST locking_overlapped_coremask_via_rpc 00:13:02.681 ************************************ 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=144915 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 144915 /var/tmp/spdk.sock 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 144915 ']' 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.681 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.681 [2024-05-16 09:25:56.133209] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:02.681 [2024-05-16 09:25:56.133263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144915 ] 00:13:02.681 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.681 [2024-05-16 09:25:56.195734] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
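Annotation: locking_overlapped_coremask, which finishes above, moves from identical to overlapping masks: the first target runs on 0x7 (cores 0-2) and the second on 0x1c (cores 2-4), so they collide only on core 2 and the second aborts with 'Cannot create lock on core 2'. check_remaining_locks then verifies that exactly the three lock files of the surviving target remain. A sketch matching the globs in the trace:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files actually present
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 of the surviving 0x7 target
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }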
00:13:02.681 [2024-05-16 09:25:56.195773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.943 [2024-05-16 09:25:56.266820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.943 [2024-05-16 09:25:56.266937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.943 [2024-05-16 09:25:56.266940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=145078 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 145078 /var/tmp/spdk2.sock 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 145078 ']' 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:03.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.515 09:25:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.515 [2024-05-16 09:25:56.964191] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:03.515 [2024-05-16 09:25:56.964242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145078 ] 00:13:03.515 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.515 [2024-05-16 09:25:57.035462] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:03.515 [2024-05-16 09:25:57.035488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.776 [2024-05-16 09:25:57.141065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.776 [2024-05-16 09:25:57.144112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.776 [2024-05-16 09:25:57.144114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.350 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.351 [2024-05-16 09:25:57.732114] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 144915 has claimed it. 
00:13:04.351 request: 00:13:04.351 { 00:13:04.351 "method": "framework_enable_cpumask_locks", 00:13:04.351 "req_id": 1 00:13:04.351 } 00:13:04.351 Got JSON-RPC error response 00:13:04.351 response: 00:13:04.351 { 00:13:04.351 "code": -32603, 00:13:04.351 "message": "Failed to claim CPU core: 2" 00:13:04.351 } 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 144915 /var/tmp/spdk.sock 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 144915 ']' 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:04.351 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 145078 /var/tmp/spdk2.sock 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 145078 ']' 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:04.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:04.612 09:25:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:04.612 00:13:04.612 real 0m2.003s 00:13:04.612 user 0m0.788s 00:13:04.612 sys 0m0.145s 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.612 09:25:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.612 ************************************ 00:13:04.612 END TEST locking_overlapped_coremask_via_rpc 00:13:04.612 ************************************ 00:13:04.612 09:25:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:04.612 09:25:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 144915 ]] 00:13:04.612 09:25:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 144915 00:13:04.612 09:25:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 144915 ']' 00:13:04.612 09:25:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 144915 00:13:04.612 09:25:58 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:13:04.613 09:25:58 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.613 09:25:58 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144915 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144915' 00:13:04.874 killing process with pid 144915 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 144915 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 144915 00:13:04.874 09:25:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 145078 ]] 00:13:04.874 09:25:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 145078 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 145078 ']' 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 145078 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:13:04.874 09:25:58 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 145078 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 145078' 00:13:05.136 killing process with pid 145078 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 145078 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 145078 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 144915 ]] 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 144915 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 144915 ']' 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 144915 00:13:05.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (144915) - No such process 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 144915 is not found' 00:13:05.136 Process with pid 144915 is not found 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 145078 ]] 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 145078 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 145078 ']' 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 145078 00:13:05.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (145078) - No such process 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 145078 is not found' 00:13:05.136 Process with pid 145078 is not found 00:13:05.136 09:25:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:05.136 00:13:05.136 real 0m15.728s 00:13:05.136 user 0m27.102s 00:13:05.136 sys 0m4.588s 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.136 09:25:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:05.136 ************************************ 00:13:05.136 END TEST cpu_locks 00:13:05.136 ************************************ 00:13:05.136 00:13:05.136 real 0m41.550s 00:13:05.136 user 1m21.483s 00:13:05.136 sys 0m7.769s 00:13:05.136 09:25:58 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.136 09:25:58 event -- common/autotest_common.sh@10 -- # set +x 00:13:05.136 ************************************ 00:13:05.136 END TEST event 00:13:05.136 ************************************ 00:13:05.397 09:25:58 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:13:05.397 09:25:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:05.397 09:25:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.397 09:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:05.397 ************************************ 00:13:05.397 START TEST thread 00:13:05.397 ************************************ 00:13:05.397 09:25:58 thread -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:13:05.397 * Looking for test storage... 00:13:05.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:13:05.397 09:25:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:05.397 09:25:58 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:05.397 09:25:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.397 09:25:58 thread -- common/autotest_common.sh@10 -- # set +x 00:13:05.397 ************************************ 00:13:05.397 START TEST thread_poller_perf 00:13:05.397 ************************************ 00:13:05.397 09:25:58 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:05.397 [2024-05-16 09:25:58.907322] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:05.397 [2024-05-16 09:25:58.907442] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145519 ] 00:13:05.397 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.658 [2024-05-16 09:25:58.982943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.658 [2024-05-16 09:25:59.057823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.659 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:06.603 ====================================== 00:13:06.603 busy:2414114864 (cyc) 00:13:06.603 total_run_count: 288000 00:13:06.603 tsc_hz: 2400000000 (cyc) 00:13:06.603 ====================================== 00:13:06.603 poller_cost: 8382 (cyc), 3492 (nsec) 00:13:06.603 00:13:06.603 real 0m1.238s 00:13:06.603 user 0m1.156s 00:13:06.603 sys 0m0.077s 00:13:06.603 09:26:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.603 09:26:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:06.603 ************************************ 00:13:06.603 END TEST thread_poller_perf 00:13:06.603 ************************************ 00:13:06.603 09:26:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:06.603 09:26:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:06.603 09:26:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.603 09:26:00 thread -- common/autotest_common.sh@10 -- # set +x 00:13:06.864 ************************************ 00:13:06.864 START TEST thread_poller_perf 00:13:06.864 ************************************ 00:13:06.864 09:26:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:06.864 [2024-05-16 09:26:00.223835] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:13:06.864 [2024-05-16 09:26:00.223933] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145867 ] 00:13:06.864 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.864 [2024-05-16 09:26:00.288463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.864 [2024-05-16 09:26:00.357356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.864 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:08.250 ====================================== 00:13:08.250 busy:2401871664 (cyc) 00:13:08.250 total_run_count: 3809000 00:13:08.250 tsc_hz: 2400000000 (cyc) 00:13:08.250 ====================================== 00:13:08.250 poller_cost: 630 (cyc), 262 (nsec) 00:13:08.250 00:13:08.250 real 0m1.209s 00:13:08.250 user 0m1.128s 00:13:08.250 sys 0m0.076s 00:13:08.250 09:26:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.250 09:26:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 ************************************ 00:13:08.250 END TEST thread_poller_perf 00:13:08.250 ************************************ 00:13:08.250 09:26:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:08.250 00:13:08.250 real 0m2.699s 00:13:08.250 user 0m2.391s 00:13:08.250 sys 0m0.307s 00:13:08.250 09:26:01 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.250 09:26:01 thread -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 ************************************ 00:13:08.250 END TEST thread 00:13:08.250 ************************************ 00:13:08.250 09:26:01 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:13:08.250 09:26:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:08.250 09:26:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.250 09:26:01 -- common/autotest_common.sh@10 -- # set +x 00:13:08.250 ************************************ 00:13:08.250 START TEST accel 00:13:08.250 ************************************ 00:13:08.250 09:26:01 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:13:08.250 * Looking for test storage... 00:13:08.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:13:08.250 09:26:01 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:13:08.250 09:26:01 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:13:08.250 09:26:01 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:08.250 09:26:01 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=146256 00:13:08.250 09:26:01 accel -- accel/accel.sh@63 -- # waitforlisten 146256 00:13:08.250 09:26:01 accel -- common/autotest_common.sh@827 -- # '[' -z 146256 ']' 00:13:08.250 09:26:01 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.251 09:26:01 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:08.251 09:26:01 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.251 09:26:01 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:08.251 09:26:01 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:08.251 09:26:01 accel -- accel/accel.sh@61 -- # build_accel_config 00:13:08.251 09:26:01 accel -- common/autotest_common.sh@10 -- # set +x 00:13:08.251 09:26:01 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.251 09:26:01 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.251 09:26:01 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.251 09:26:01 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:08.251 09:26:01 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.251 09:26:01 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:08.251 09:26:01 accel -- accel/accel.sh@41 -- # jq -r . 00:13:08.251 [2024-05-16 09:26:01.682932] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:08.251 [2024-05-16 09:26:01.683005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146256 ] 00:13:08.251 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.251 [2024-05-16 09:26:01.747999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.512 [2024-05-16 09:26:01.821262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.085 09:26:02 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:09.085 09:26:02 accel -- common/autotest_common.sh@860 -- # return 0 00:13:09.085 09:26:02 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:09.085 09:26:02 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:09.085 09:26:02 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:13:09.085 09:26:02 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:09.085 09:26:02 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:09.085 09:26:02 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:09.085 09:26:02 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.085 09:26:02 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@10 -- # set +x 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 
09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # IFS== 00:13:09.086 09:26:02 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:09.086 09:26:02 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:09.086 09:26:02 accel -- accel/accel.sh@75 -- # killprocess 146256 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@946 -- # '[' -z 146256 ']' 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@950 -- # kill -0 146256 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@951 -- # uname 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 146256 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 146256' 00:13:09.086 killing process with pid 146256 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@965 -- # kill 146256 00:13:09.086 09:26:02 accel -- common/autotest_common.sh@970 -- # wait 146256 00:13:09.357 09:26:02 accel -- accel/accel.sh@76 -- # trap - ERR 00:13:09.358 09:26:02 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:09.358 09:26:02 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:09.358 09:26:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.358 09:26:02 accel -- common/autotest_common.sh@10 -- # set +x 00:13:09.358 09:26:02 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:13:09.358 09:26:02 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:13:09.358 09:26:02 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.358 09:26:02 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:13:09.358 09:26:02 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:09.358 09:26:02 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:09.358 09:26:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.358 09:26:02 accel -- common/autotest_common.sh@10 -- # set +x 00:13:09.621 ************************************ 00:13:09.621 START TEST accel_missing_filename 00:13:09.621 ************************************ 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.621 09:26:02 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:13:09.621 09:26:02 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:13:09.621 [2024-05-16 09:26:02.970845] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:09.621 [2024-05-16 09:26:02.970950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146562 ] 00:13:09.621 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.621 [2024-05-16 09:26:03.036792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.621 [2024-05-16 09:26:03.111517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.621 [2024-05-16 09:26:03.143749] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:09.882 [2024-05-16 09:26:03.180896] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:09.882 A filename is required. 
00:13:09.882 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:13:09.882 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:09.882 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:13:09.882 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:13:09.883 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:13:09.883 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:09.883 00:13:09.883 real 0m0.295s 00:13:09.883 user 0m0.227s 00:13:09.883 sys 0m0.111s 00:13:09.883 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.883 09:26:03 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:13:09.883 ************************************ 00:13:09.883 END TEST accel_missing_filename 00:13:09.883 ************************************ 00:13:09.883 09:26:03 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:09.883 09:26:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:13:09.883 09:26:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.883 09:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:13:09.883 ************************************ 00:13:09.883 START TEST accel_compress_verify 00:13:09.883 ************************************ 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.883 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.883 
09:26:03 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:09.883 09:26:03 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:13:09.883 [2024-05-16 09:26:03.341685] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:09.883 [2024-05-16 09:26:03.341779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146654 ] 00:13:09.883 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.883 [2024-05-16 09:26:03.403065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.145 [2024-05-16 09:26:03.472123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.145 [2024-05-16 09:26:03.503881] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:10.145 [2024-05-16 09:26:03.540778] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:10.145 00:13:10.145 Compression does not support the verify option, aborting. 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.145 00:13:10.145 real 0m0.283s 00:13:10.145 user 0m0.228s 00:13:10.145 sys 0m0.096s 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.145 09:26:03 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 ************************************ 00:13:10.145 END TEST accel_compress_verify 00:13:10.145 ************************************ 00:13:10.145 09:26:03 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:10.145 09:26:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:10.145 09:26:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.145 09:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:13:10.145 ************************************ 00:13:10.145 START TEST accel_wrong_workload 00:13:10.145 ************************************ 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.145 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:13:10.145 
09:26:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:13:10.145 09:26:03 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:13:10.145 Unsupported workload type: foobar 00:13:10.145 [2024-05-16 09:26:03.703717] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:10.406 accel_perf options: 00:13:10.406 [-h help message] 00:13:10.406 [-q queue depth per core] 00:13:10.406 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:10.406 [-T number of threads per core 00:13:10.406 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:10.406 [-t time in seconds] 00:13:10.406 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:10.406 [ dif_verify, , dif_generate, dif_generate_copy 00:13:10.406 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:10.406 [-l for compress/decompress workloads, name of uncompressed input file 00:13:10.406 [-S for crc32c workload, use this seed value (default 0) 00:13:10.406 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:10.406 [-f for fill workload, use this BYTE value (default 255) 00:13:10.406 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:10.406 [-y verify result if this switch is on] 00:13:10.406 [-a tasks to allocate per core (default: same value as -q)] 00:13:10.406 Can be used to spread operations across a wider range of memory. 
00:13:10.406 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:13:10.406 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.406 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.406 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.406 00:13:10.406 real 0m0.037s 00:13:10.406 user 0m0.019s 00:13:10.406 sys 0m0.017s 00:13:10.406 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.406 09:26:03 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:13:10.406 ************************************ 00:13:10.406 END TEST accel_wrong_workload 00:13:10.406 ************************************ 00:13:10.406 Error: writing output failed: Broken pipe 00:13:10.406 09:26:03 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:10.406 09:26:03 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:13:10.406 09:26:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.406 09:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:13:10.406 ************************************ 00:13:10.406 START TEST accel_negative_buffers 00:13:10.406 ************************************ 00:13:10.406 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:10.406 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:13:10.407 09:26:03 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:13:10.407 -x option must be non-negative. 
00:13:10.407 [2024-05-16 09:26:03.818469] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:10.407 accel_perf options: 00:13:10.407 [-h help message] 00:13:10.407 [-q queue depth per core] 00:13:10.407 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:10.407 [-T number of threads per core 00:13:10.407 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:10.407 [-t time in seconds] 00:13:10.407 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:10.407 [ dif_verify, , dif_generate, dif_generate_copy 00:13:10.407 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:10.407 [-l for compress/decompress workloads, name of uncompressed input file 00:13:10.407 [-S for crc32c workload, use this seed value (default 0) 00:13:10.407 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:10.407 [-f for fill workload, use this BYTE value (default 255) 00:13:10.407 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:10.407 [-y verify result if this switch is on] 00:13:10.407 [-a tasks to allocate per core (default: same value as -q)] 00:13:10.407 Can be used to spread operations across a wider range of memory. 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.407 00:13:10.407 real 0m0.035s 00:13:10.407 user 0m0.027s 00:13:10.407 sys 0m0.008s 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.407 09:26:03 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:13:10.407 ************************************ 00:13:10.407 END TEST accel_negative_buffers 00:13:10.407 ************************************ 00:13:10.407 Error: writing output failed: Broken pipe 00:13:10.407 09:26:03 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:10.407 09:26:03 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:10.407 09:26:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.407 09:26:03 accel -- common/autotest_common.sh@10 -- # set +x 00:13:10.407 ************************************ 00:13:10.407 START TEST accel_crc32c 00:13:10.407 ************************************ 00:13:10.407 09:26:03 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:10.407 09:26:03 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:10.407 [2024-05-16 09:26:03.933022] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:10.407 [2024-05-16 09:26:03.933109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146713 ] 00:13:10.407 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.668 [2024-05-16 09:26:03.994078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.668 [2024-05-16 09:26:04.058268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:10.668 09:26:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:12.056 09:26:05 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:12.056 00:13:12.056 real 0m1.282s 00:13:12.056 user 0m1.191s 00:13:12.056 sys 0m0.102s 00:13:12.056 09:26:05 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:12.056 09:26:05 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:12.056 ************************************ 00:13:12.056 END TEST accel_crc32c 00:13:12.056 ************************************ 00:13:12.056 09:26:05 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:12.056 09:26:05 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:12.056 09:26:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.056 09:26:05 accel -- common/autotest_common.sh@10 -- # set +x 00:13:12.056 ************************************ 00:13:12.056 START TEST accel_crc32c_C2 00:13:12.056 ************************************ 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:12.056 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:12.056 [2024-05-16 09:26:05.298593] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:12.056 [2024-05-16 09:26:05.298691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147071 ] 00:13:12.056 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.056 [2024-05-16 09:26:05.361192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.056 [2024-05-16 09:26:05.430819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:12.057 09:26:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:13.001 09:26:06 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:13.001 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:13.263 09:26:06 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:13.263 00:13:13.263 real 0m1.290s 00:13:13.263 user 0m1.202s 00:13:13.263 sys 0m0.098s 00:13:13.263 09:26:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.263 09:26:06 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:13.263 ************************************ 00:13:13.263 END TEST accel_crc32c_C2 00:13:13.263 ************************************ 00:13:13.263 09:26:06 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:13.263 09:26:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:13.263 09:26:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.263 09:26:06 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.263 ************************************ 00:13:13.263 START TEST accel_copy 00:13:13.263 ************************************ 00:13:13.263 09:26:06 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:13.263 09:26:06 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.263 09:26:06 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.264 09:26:06 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.264 09:26:06 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.264 09:26:06 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:13.264 09:26:06 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:13:13.264 [2024-05-16 09:26:06.669961] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:13.264 [2024-05-16 09:26:06.670025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147419 ] 00:13:13.264 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.264 [2024-05-16 09:26:06.731111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.264 [2024-05-16 09:26:06.797793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.525 09:26:06 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:13:13.525 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:13.526 09:26:06 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
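The two crc32c cases that completed above (accel_crc32c and its -C 2 variant) drive accel_perf's software module with 4096-byte buffers for 1 second each, and pass when the reported module is "software" and the opcode is "crc32c". For reference, a minimal bitwise sketch of CRC-32C (the Castagnoli polynomial) in plain C is shown below; it is illustrative only and is not SPDK's optimized implementation, though it computes the same checksum.

    /* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
     * Sketch for illustration; real software paths typically use table-
     * or SSE4.2-based variants, but the result is the same. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
        }
        return ~crc;
    }

    int main(void)
    {
        uint8_t buf[4096];   /* same '4096 bytes' block size as the traced test */

        /* standard check value: CRC-32C("123456789") == 0xe3069283 */
        printf("check: %08x\n", (unsigned)crc32c(0, "123456789", 9));

        memset(buf, 0xa5, sizeof(buf));
        printf("4 KiB buffer: %08x\n", (unsigned)crc32c(0, buf, sizeof(buf)));
        return 0;
    }

The printed check value 0xe3069283 is the well-known CRC-32C result for the ASCII string "123456789", which makes the sketch easy to verify against any other implementation.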
00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:13:14.470 09:26:07 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:14.470 00:13:14.470 real 0m1.285s 00:13:14.470 user 0m1.189s 00:13:14.470 sys 0m0.106s 00:13:14.470 09:26:07 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:14.470 09:26:07 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:13:14.470 ************************************ 00:13:14.470 END TEST accel_copy 00:13:14.470 ************************************ 00:13:14.470 09:26:07 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:14.470 09:26:07 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:14.470 09:26:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:14.470 09:26:07 accel -- common/autotest_common.sh@10 -- # set +x 00:13:14.470 ************************************ 00:13:14.470 START TEST accel_fill 00:13:14.470 ************************************ 00:13:14.470 09:26:08 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:14.470 09:26:08 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:13:14.470 09:26:08 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:13:14.733 [2024-05-16 09:26:08.036229] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:14.733 [2024-05-16 09:26:08.036300] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147725 ] 00:13:14.733 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.733 [2024-05-16 09:26:08.099861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.733 [2024-05-16 09:26:08.169297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:14.733 09:26:08 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:13:16.123 09:26:09 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:16.123 00:13:16.123 real 0m1.290s 00:13:16.123 user 0m1.203s 00:13:16.123 sys 0m0.098s 00:13:16.123 09:26:09 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:16.123 09:26:09 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:13:16.123 ************************************ 00:13:16.123 END TEST accel_fill 00:13:16.123 ************************************ 00:13:16.123 09:26:09 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:13:16.123 09:26:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:16.123 09:26:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.123 09:26:09 accel -- common/autotest_common.sh@10 -- # set +x 00:13:16.123 ************************************ 00:13:16.123 START TEST accel_copy_crc32c 00:13:16.123 ************************************ 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
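The copy and fill cases that just finished are the simplest opcodes in this suite: accel_copy moves a 4096-byte source into a destination, and accel_fill writes the 0x80 byte pattern seen in the trace (from -f 128) into 4096-byte buffers, with the additional -q 64 -a 64 arguments the test passes. As plain-C stand-ins (an illustration only, not the accel framework's actual dispatch path) they reduce to memcpy() and memset(); the copy_crc32c case being configured next fuses the copy with the CRC-32C from the previous sketch.

    /* Software stand-ins for the copy and fill opcodes benchmarked above.
     * Buffer size and fill pattern mirror the traced configuration
     * (4096-byte blocks, -f 128 == 0x80); illustration only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SZ       4096
    #define FILL_PATTERN 0x80

    int main(void)
    {
        static uint8_t src[BUF_SZ], dst[BUF_SZ], filled[BUF_SZ];

        for (size_t i = 0; i < BUF_SZ; i++)
            src[i] = (uint8_t)i;

        memcpy(dst, src, BUF_SZ);               /* "-w copy" */
        memset(filled, FILL_PATTERN, BUF_SZ);   /* "-w fill -f 128" */

        printf("copy ok: %d, fill ok: %d\n",
               memcmp(dst, src, BUF_SZ) == 0,
               filled[0] == FILL_PATTERN && filled[BUF_SZ - 1] == FILL_PATTERN);
        return 0;
    }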
00:13:16.123 [2024-05-16 09:26:09.407686] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:16.123 [2024-05-16 09:26:09.407759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147929 ] 00:13:16.123 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.123 [2024-05-16 09:26:09.471179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.123 [2024-05-16 09:26:09.540083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:16.123 09:26:09 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:16.123 09:26:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:17.512 00:13:17.512 real 0m1.289s 00:13:17.512 user 0m1.196s 00:13:17.512 sys 0m0.103s 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.512 09:26:10 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:17.512 ************************************ 00:13:17.512 END TEST accel_copy_crc32c 00:13:17.512 ************************************ 00:13:17.512 09:26:10 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:13:17.512 09:26:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:17.512 09:26:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.512 09:26:10 accel -- common/autotest_common.sh@10 -- # set +x 00:13:17.512 ************************************ 00:13:17.512 START TEST accel_copy_crc32c_C2 00:13:17.512 ************************************ 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:17.512 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:17.512 [2024-05-16 09:26:10.780321] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:17.512 [2024-05-16 09:26:10.780389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148163 ] 00:13:17.512 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.512 [2024-05-16 09:26:10.842803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.513 [2024-05-16 09:26:10.913645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:17.513 09:26:10 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.513 09:26:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:18.901 00:13:18.901 real 0m1.290s 00:13:18.901 user 0m1.195s 00:13:18.901 sys 0m0.106s 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:18.901 09:26:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:13:18.901 ************************************ 00:13:18.901 END TEST accel_copy_crc32c_C2 00:13:18.901 ************************************ 00:13:18.901 09:26:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:13:18.901 09:26:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:18.901 09:26:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.901 09:26:12 accel -- common/autotest_common.sh@10 -- # set +x 00:13:18.901 ************************************ 00:13:18.901 START TEST accel_dualcast 00:13:18.901 ************************************ 00:13:18.901 09:26:12 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:13:18.901 [2024-05-16 09:26:12.149747] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
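The copy_crc32c cases above fuse a copy with a CRC-32C of the copied data in a single operation (the log shows the same 4096-byte software configuration, plus an '8192 bytes' value in the -C 2 variant). The dualcast case now being set up copies a single source into two destination buffers; that reading of the opcode is an assumption based on its name and SPDK's accel opcode set rather than anything in this trace. A trivial software sketch of that semantic:

    /* Software sketch of a "dualcast": one 4096-byte source copied into
     * two destinations. Assumption: this mirrors what the dualcast
     * opcode being configured here does; it is not SPDK code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SZ 4096

    static void dualcast_sw(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        static uint8_t src[BUF_SZ], dst1[BUF_SZ], dst2[BUF_SZ];

        for (size_t i = 0; i < BUF_SZ; i++)
            src[i] = (uint8_t)(i * 7u);

        dualcast_sw(dst1, dst2, src, BUF_SZ);

        printf("dualcast ok: %d\n",
               memcmp(dst1, src, BUF_SZ) == 0 && memcmp(dst2, src, BUF_SZ) == 0);
        return 0;
    }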
00:13:18.901 [2024-05-16 09:26:12.149808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148509 ] 00:13:18.901 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.901 [2024-05-16 09:26:12.209994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.901 [2024-05-16 09:26:12.275997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 
09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.901 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:18.902 09:26:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:19.842 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:20.103 09:26:13 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:13:20.103 09:26:13 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:20.103 00:13:20.103 real 0m1.282s 00:13:20.103 user 0m1.206s 00:13:20.103 sys 0m0.086s 00:13:20.103 09:26:13 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.103 09:26:13 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:13:20.103 ************************************ 00:13:20.103 END TEST accel_dualcast 00:13:20.103 ************************************ 00:13:20.103 09:26:13 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:13:20.103 09:26:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:20.103 09:26:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:20.103 09:26:13 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.103 ************************************ 00:13:20.103 START TEST accel_compare 00:13:20.103 ************************************ 00:13:20.103 09:26:13 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:13:20.103 09:26:13 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:13:20.103 [2024-05-16 09:26:13.513775] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
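After each run the harness checks the values it parsed back: a non-empty module name, a non-empty opcode, and a module equal to "software" (expected here, since no accel hardware configuration was supplied). A simplified paraphrase of those checks, using the accel_module/accel_opc names visible in the xtrace above (the exact expression at accel.sh@27 is assumed, not copied):

  # rough paraphrase of the post-run validation seen at accel/accel.sh@27
  [[ -n "$accel_module" ]]              # accel_perf reported a module
  [[ -n "$accel_opc" ]]                 # accel_perf reported the requested opcode
  [[ "$accel_module" == "software" ]]   # and it is the expected software path

The real/user/sys lines are bash's time output for the whole helper: roughly 1.28-1.29 s of wall time per 1-second (-t 1) run, the difference being application start-up and teardown.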
00:13:20.104 [2024-05-16 09:26:13.513839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148864 ] 00:13:20.104 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.104 [2024-05-16 09:26:13.575773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.104 [2024-05-16 09:26:13.645235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.364 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:20.365 09:26:13 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:13:21.306 09:26:14 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:21.306 00:13:21.306 real 0m1.287s 00:13:21.306 user 0m1.197s 00:13:21.306 sys 0m0.100s 00:13:21.306 09:26:14 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.306 09:26:14 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:13:21.306 ************************************ 00:13:21.306 END TEST accel_compare 00:13:21.306 ************************************ 00:13:21.306 09:26:14 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:13:21.306 09:26:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:21.306 09:26:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.306 09:26:14 accel -- common/autotest_common.sh@10 -- # set +x 00:13:21.306 ************************************ 00:13:21.306 START TEST accel_xor 00:13:21.306 ************************************ 00:13:21.306 09:26:14 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:13:21.306 09:26:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:21.307 09:26:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:21.568 [2024-05-16 09:26:14.881521] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:13:21.568 [2024-05-16 09:26:14.881598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149213 ] 00:13:21.568 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.568 [2024-05-16 09:26:14.944704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.568 [2024-05-16 09:26:15.016225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:21.568 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:21.569 09:26:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 
09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:22.954 00:13:22.954 real 0m1.291s 00:13:22.954 user 0m1.202s 00:13:22.954 sys 0m0.100s 00:13:22.954 09:26:16 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.954 09:26:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:22.954 ************************************ 00:13:22.954 END TEST accel_xor 00:13:22.954 ************************************ 00:13:22.954 09:26:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:13:22.954 09:26:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:22.954 09:26:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.954 09:26:16 accel -- common/autotest_common.sh@10 -- # set +x 00:13:22.954 ************************************ 00:13:22.954 START TEST accel_xor 00:13:22.954 ************************************ 00:13:22.954 09:26:16 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:22.954 [2024-05-16 09:26:16.252587] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
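The xor workload is exercised twice: first with its default of two source buffers (val=2 in the earlier xtrace) and then, via run_test accel_xor accel_test -t 1 -w xor -y -x 3, with three sources (val=3 below). A sketch of the second form, with the relative path an assumption for a local SPDK checkout:

  # XOR three source buffers into one destination for 1 second and verify the result
  ./build/examples/accel_perf -t 1 -w xor -y -x 3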
00:13:22.954 [2024-05-16 09:26:16.252648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149446 ] 00:13:22.954 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.954 [2024-05-16 09:26:16.314292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.954 [2024-05-16 09:26:16.382149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:22.954 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:22.955 09:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:24.342 
09:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:24.342 09:26:17 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:24.342 00:13:24.342 real 0m1.286s 00:13:24.342 user 0m1.198s 00:13:24.342 sys 0m0.099s 00:13:24.342 09:26:17 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.342 09:26:17 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:24.342 ************************************ 00:13:24.342 END TEST accel_xor 00:13:24.342 ************************************ 00:13:24.342 09:26:17 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:24.342 09:26:17 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:24.342 09:26:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.342 09:26:17 accel -- common/autotest_common.sh@10 -- # set +x 00:13:24.342 ************************************ 00:13:24.342 START TEST accel_dif_verify 00:13:24.342 ************************************ 00:13:24.342 09:26:17 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:13:24.342 [2024-05-16 09:26:17.619498] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:13:24.342 [2024-05-16 09:26:17.619562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149642 ] 00:13:24.342 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.342 [2024-05-16 09:26:17.680129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.342 [2024-05-16 09:26:17.744463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 
09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:24.342 09:26:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:25.729 
09:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:25.729 09:26:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:25.729 00:13:25.729 real 0m1.282s 00:13:25.729 user 0m1.191s 00:13:25.729 sys 0m0.103s 00:13:25.729 09:26:18 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.729 09:26:18 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:13:25.729 ************************************ 00:13:25.729 END TEST accel_dif_verify 00:13:25.729 ************************************ 00:13:25.729 09:26:18 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:25.729 09:26:18 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:25.729 09:26:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.729 09:26:18 accel -- common/autotest_common.sh@10 -- # set +x 00:13:25.729 ************************************ 00:13:25.729 START TEST accel_dif_generate 00:13:25.729 ************************************ 00:13:25.729 09:26:18 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 
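The last two cases move from pure data-movement opcodes to the DIF ones: dif_verify above and dif_generate starting here. Their xtrace reads back extra geometry ('4096 bytes' buffers plus '512 bytes' and '8 bytes' values, which appear to describe the protected block and DIF field sizes accel_perf reports). A sketch of running both by hand, workload flags as echoed in the log, the config-over-fd option omitted and the relative path assumed for a local build:

  # generate DIF metadata for 1 second, then verify it in a second run
  ./build/examples/accel_perf -t 1 -w dif_generate
  ./build/examples/accel_perf -t 1 -w dif_verify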
09:26:18 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:13:25.730 09:26:18 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:13:25.730 [2024-05-16 09:26:18.984451] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:25.730 [2024-05-16 09:26:18.984544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149950 ] 00:13:25.730 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.730 [2024-05-16 09:26:19.046282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.730 [2024-05-16 09:26:19.113289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:25.730 09:26:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:27.115 09:26:20 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:27.115 00:13:27.115 real 0m1.287s 00:13:27.115 user 0m1.199s 00:13:27.115 sys 
0m0.100s 00:13:27.115 09:26:20 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:27.115 09:26:20 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:13:27.115 ************************************ 00:13:27.115 END TEST accel_dif_generate 00:13:27.115 ************************************ 00:13:27.115 09:26:20 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:27.115 09:26:20 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:27.115 09:26:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.115 09:26:20 accel -- common/autotest_common.sh@10 -- # set +x 00:13:27.115 ************************************ 00:13:27.115 START TEST accel_dif_generate_copy 00:13:27.115 ************************************ 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:27.115 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:13:27.115 [2024-05-16 09:26:20.352514] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
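The accel/accel.sh@12 entry just above records the exact command this case runs: build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy. The remaining cases in this section reuse the same binary and only vary the trailing flags. Replayed by hand, with -c /dev/fd/62 dropped because the generated JSON config was empty in these runs (all the [[ 0 -gt 0 ]] guards were false), the logged command lines amount to roughly:

    # Command lines taken from this log (software engine, one second each);
    # flag meanings are inferred, only the flags themselves are certain.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    perf=$spdk/build/examples/accel_perf
    "$perf" -t 1 -w dif_generate
    "$perf" -t 1 -w dif_generate_copy
    "$perf" -t 1 -w compress   -l "$spdk/test/accel/bib"
    "$perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y
    "$perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y -o 0
    "$perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y -m 0xf
    "$perf" -t 1 -w decompress -l "$spdk/test/accel/bib" -y -o 0 -m 0xf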
00:13:27.115 [2024-05-16 09:26:20.352606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150307 ] 00:13:27.115 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.116 [2024-05-16 09:26:20.416723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.116 [2024-05-16 09:26:20.488386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:27.116 09:26:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
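Each case ends with the same three accel/accel.sh@27 assertions (they follow below for dif_generate_copy and already appeared above for dif_generate): the module and opcode parsed out of accel_perf's output must be non-empty, and the module must be the software engine, since no hardware accel configuration was supplied. As a sketch rather than the literal script text:

    # Pass condition suggested by the accel.sh@27 lines in this log.
    [[ -n $accel_module ]]             # a module was reported at all
    [[ -n $accel_opc ]]                # the requested opcode was reported back
    [[ $accel_module == "software" ]]  # and the software engine handled it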
00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:28.060 00:13:28.060 real 0m1.294s 00:13:28.060 user 0m1.192s 00:13:28.060 sys 0m0.112s 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:28.060 09:26:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:13:28.060 ************************************ 00:13:28.060 END TEST accel_dif_generate_copy 00:13:28.060 ************************************ 00:13:28.321 09:26:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:28.321 09:26:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:28.321 09:26:21 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:28.321 09:26:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:28.321 09:26:21 accel -- common/autotest_common.sh@10 -- # set +x 00:13:28.321 ************************************ 00:13:28.321 START TEST accel_comp 00:13:28.321 ************************************ 00:13:28.321 09:26:21 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:13:28.321 09:26:21 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:13:28.322 [2024-05-16 09:26:21.726248] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:28.322 [2024-05-16 09:26:21.726337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150656 ] 00:13:28.322 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.322 [2024-05-16 09:26:21.789838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.322 [2024-05-16 09:26:21.861279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 
09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:28.583 09:26:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.526 09:26:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:29.527 09:26:22 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:29.527 00:13:29.527 real 0m1.295s 00:13:29.527 user 0m1.199s 00:13:29.527 sys 0m0.108s 00:13:29.527 09:26:22 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:29.527 09:26:22 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 ************************************ 00:13:29.527 END TEST accel_comp 00:13:29.527 ************************************ 00:13:29.527 09:26:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:29.527 09:26:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:29.527 09:26:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:29.527 09:26:23 accel -- common/autotest_common.sh@10 -- # set +x 00:13:29.527 ************************************ 00:13:29.527 START TEST accel_decomp 00:13:29.527 ************************************ 00:13:29.527 09:26:23 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:29.527 09:26:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:13:29.788 [2024-05-16 09:26:23.102543] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:29.788 [2024-05-16 09:26:23.102608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151005 ] 00:13:29.788 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.788 [2024-05-16 09:26:23.164978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.788 [2024-05-16 09:26:23.235128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.788 09:26:23 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.788 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:29.789 09:26:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:31.176 09:26:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:31.176 00:13:31.176 real 0m1.293s 00:13:31.176 user 0m1.206s 00:13:31.176 sys 0m0.099s 00:13:31.176 09:26:24 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.176 09:26:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:31.176 ************************************ 00:13:31.176 END TEST accel_decomp 00:13:31.176 ************************************ 00:13:31.176 
09:26:24 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:31.176 09:26:24 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:31.176 09:26:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.176 09:26:24 accel -- common/autotest_common.sh@10 -- # set +x 00:13:31.176 ************************************ 00:13:31.176 START TEST accel_decmop_full 00:13:31.176 ************************************ 00:13:31.176 09:26:24 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:13:31.176 [2024-05-16 09:26:24.473773] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
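The asterisk banners, the START TEST / END TEST markers, and the real/user/sys timings that bracket every case are emitted by the run_test wrapper from autotest_common.sh, together with the '[' 11 -le 1 ']' style argument check and the xtrace_disable calls visible here. Only the behaviour observable in this log is reproduced in the rough sketch below; the real wrapper certainly does more bookkeeping than this:

    # Hedged reconstruction of the wrapper behaviour seen in this log.
    run_test() {
        if [ "$#" -le 1 ]; then            # mirrors the logged "'[' N -le 1 ']'" guard
            echo "run_test needs a test name and a command" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }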
00:13:31.176 [2024-05-16 09:26:24.473838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151226 ] 00:13:31.176 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.176 [2024-05-16 09:26:24.536725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.176 [2024-05-16 09:26:24.608393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.176 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:31.177 09:26:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:32.564 09:26:25 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:32.564 00:13:32.564 real 0m1.308s 00:13:32.564 user 0m1.216s 00:13:32.564 sys 0m0.105s 00:13:32.564 09:26:25 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.564 09:26:25 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:13:32.564 ************************************ 00:13:32.564 END TEST accel_decmop_full 00:13:32.564 ************************************ 00:13:32.564 09:26:25 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:32.564 09:26:25 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:32.564 09:26:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.564 09:26:25 accel -- common/autotest_common.sh@10 -- # set +x 00:13:32.564 ************************************ 00:13:32.564 START TEST accel_decomp_mcore 00:13:32.564 ************************************ 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:32.564 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:32.565 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:32.565 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:32.565 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:32.565 09:26:25 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:32.565 [2024-05-16 09:26:25.864280] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:32.565 [2024-05-16 09:26:25.864341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151432 ] 00:13:32.565 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.565 [2024-05-16 09:26:25.926681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.565 [2024-05-16 09:26:25.996669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.565 [2024-05-16 09:26:25.996780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.565 [2024-05-16 09:26:25.996961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.565 [2024-05-16 09:26:25.996962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:32.565 09:26:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
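The accel_decomp_mcore case above reduces to a single accel_perf invocation; every flag below is taken verbatim from the log, and SPDK_DIR is only a convenience variable assumed to point at the same workspace checkout. A hedged sketch of rerunning it by hand:

# hedged sketch: software decompress of the bundled bib file on 4 cores (mask 0xf)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: adjust to your checkout
"$SPDK_DIR/build/examples/accel_perf" \
    -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y -m 0xf
# the harness additionally passes -c /dev/fd/62 with its generated accel JSON config;
# omitting it should still exercise the software path when no hardware modules are configured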
00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.952 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:33.953 00:13:33.953 real 0m1.299s 00:13:33.953 user 0m4.439s 00:13:33.953 sys 0m0.108s 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.953 09:26:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:33.953 ************************************ 00:13:33.953 END TEST accel_decomp_mcore 00:13:33.953 ************************************ 00:13:33.953 09:26:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:33.953 09:26:27 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:33.953 09:26:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.953 09:26:27 accel -- common/autotest_common.sh@10 -- # set +x 00:13:33.953 ************************************ 00:13:33.953 START TEST accel_decomp_full_mcore 00:13:33.953 ************************************ 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:33.953 [2024-05-16 09:26:27.245059] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:33.953 [2024-05-16 09:26:27.245121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151761 ] 00:13:33.953 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.953 [2024-05-16 09:26:27.305927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.953 [2024-05-16 09:26:27.374461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.953 [2024-05-16 09:26:27.374581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.953 [2024-05-16 09:26:27.374735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.953 [2024-05-16 09:26:27.374736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:33.953 09:26:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:35.338 00:13:35.338 real 0m1.313s 00:13:35.338 user 0m4.495s 00:13:35.338 sys 0m0.112s 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.338 09:26:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:35.338 ************************************ 00:13:35.338 END TEST accel_decomp_full_mcore 00:13:35.338 ************************************ 00:13:35.338 09:26:28 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:35.338 09:26:28 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:35.338 09:26:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.338 09:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:13:35.338 ************************************ 00:13:35.338 START TEST accel_decomp_mthread 00:13:35.338 ************************************ 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:35.338 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
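The full_mcore variant that just finished differs from the plain mcore run only by the extra -o 0 argument, and the echoed data size changes from '4096 bytes' to '111250 bytes'; this suggests (a hedged reading, not confirmed by the log itself) that -o 0 makes accel_perf use whole-file buffers instead of 4 KiB chunks. Side by side, with the same assumed SPDK_DIR as above:

# 4 KiB chunks, 4 cores (accel_decomp_mcore)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf
# whole-file (111250-byte) buffers, 4 cores (accel_decomp_full_mcore) -- only -o 0 is added
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf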
00:13:35.339 [2024-05-16 09:26:28.639457] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:35.339 [2024-05-16 09:26:28.639515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152120 ] 00:13:35.339 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.339 [2024-05-16 09:26:28.699656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.339 [2024-05-16 09:26:28.765207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:35.339 09:26:28 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:36.726 00:13:36.726 real 0m1.289s 00:13:36.726 user 0m1.203s 00:13:36.726 sys 0m0.098s 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.726 09:26:29 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:36.726 ************************************ 00:13:36.726 END TEST accel_decomp_mthread 00:13:36.726 ************************************ 00:13:36.727 09:26:29 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:36.727 09:26:29 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:36.727 09:26:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.727 09:26:29 
accel -- common/autotest_common.sh@10 -- # set +x 00:13:36.727 ************************************ 00:13:36.727 START TEST accel_decomp_full_mthread 00:13:36.727 ************************************ 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:36.727 09:26:29 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:36.727 [2024-05-16 09:26:30.011090] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
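The mthread variants drop the 0xf core mask in favour of -T 2: the EAL parameters show a single core (-c 0x1) driving two worker threads, which is also why the reported user time (~1.2 s) roughly matches real time instead of the ~4.4 s seen on the four-core runs. A hedged reproduction, reusing the assumed SPDK_DIR:

# one reactor (default core mask 0x1 here), two worker threads per core, 4 KiB chunks
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -T 2
# the accel_decomp_full_mthread case now starting adds -o 0 on top of this for whole-file buffers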
00:13:36.727 [2024-05-16 09:26:30.011180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152468 ] 00:13:36.727 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.727 [2024-05-16 09:26:30.072686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.727 [2024-05-16 09:26:30.137947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
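Every decompress variant in this block finishes with the same three checks at accel/accel.sh@27: an engine name was captured, the opcode seen was decompress, and the engine was the software fallback (no accel JSON was supplied, hence the empty-string test at accel/accel.sh@36). Distilled into plain bash, with accel_module and accel_opc standing in for the values replayed from the val= lines (variable names come from the script; the literal values here are illustrative):

accel_module=software    # recorded by accel.sh@22 while replaying the val= stream
accel_opc=decompress     # recorded by accel.sh@23
[[ -n "$accel_module" ]]             # some engine handled the operation
[[ -n "$accel_opc" ]]                # the opcode was captured
[[ "$accel_module" == software ]]    # and it was the software engine, as expected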
00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:36.727 09:26:30 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:38.113 00:13:38.113 real 0m1.317s 00:13:38.113 user 0m1.212s 00:13:38.113 sys 0m0.116s 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.113 09:26:31 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:38.113 ************************************ 00:13:38.113 END TEST accel_decomp_full_mthread 00:13:38.113 
************************************ 00:13:38.113 09:26:31 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:38.113 09:26:31 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:38.113 09:26:31 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:38.113 09:26:31 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:38.113 09:26:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:38.113 09:26:31 accel -- common/autotest_common.sh@10 -- # set +x 00:13:38.113 09:26:31 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:38.113 09:26:31 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:38.113 09:26:31 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:38.113 09:26:31 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:38.113 09:26:31 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:38.113 09:26:31 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:38.113 09:26:31 accel -- accel/accel.sh@41 -- # jq -r . 00:13:38.113 ************************************ 00:13:38.113 START TEST accel_dif_functional_tests 00:13:38.114 ************************************ 00:13:38.114 09:26:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:38.114 [2024-05-16 09:26:31.435903] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:38.114 [2024-05-16 09:26:31.435954] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152792 ] 00:13:38.114 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.114 [2024-05-16 09:26:31.498673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.114 [2024-05-16 09:26:31.574797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.114 [2024-05-16 09:26:31.574913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.114 [2024-05-16 09:26:31.574915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.114 00:13:38.114 00:13:38.114 CUnit - A unit testing framework for C - Version 2.1-3 00:13:38.114 http://cunit.sourceforge.net/ 00:13:38.114 00:13:38.114 00:13:38.114 Suite: accel_dif 00:13:38.114 Test: verify: DIF generated, GUARD check ...passed 00:13:38.114 Test: verify: DIF generated, APPTAG check ...passed 00:13:38.114 Test: verify: DIF generated, REFTAG check ...passed 00:13:38.114 Test: verify: DIF not generated, GUARD check ...[2024-05-16 09:26:31.630908] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:38.114 [2024-05-16 09:26:31.630945] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:38.114 passed 00:13:38.114 Test: verify: DIF not generated, APPTAG check ...[2024-05-16 09:26:31.630982] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:38.114 [2024-05-16 09:26:31.630996] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:38.114 passed 00:13:38.114 Test: verify: DIF not generated, REFTAG check ...[2024-05-16 09:26:31.631014] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:38.114 [2024-05-16 
09:26:31.631028] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:38.114 passed 00:13:38.114 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:38.114 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-16 09:26:31.631079] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:38.114 passed 00:13:38.114 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:38.114 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:38.114 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:38.114 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-16 09:26:31.631199] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:38.114 passed 00:13:38.114 Test: generate copy: DIF generated, GUARD check ...passed 00:13:38.114 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:38.114 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:38.114 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:38.114 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:38.114 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:38.114 Test: generate copy: iovecs-len validate ...[2024-05-16 09:26:31.631390] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:38.114 passed 00:13:38.114 Test: generate copy: buffer alignment validate ...passed 00:13:38.114 00:13:38.114 Run Summary: Type Total Ran Passed Failed Inactive 00:13:38.114 suites 1 1 n/a 0 0 00:13:38.114 tests 20 20 20 0 0 00:13:38.114 asserts 204 204 204 0 n/a 00:13:38.114 00:13:38.114 Elapsed time = 0.002 seconds 00:13:38.375 00:13:38.375 real 0m0.364s 00:13:38.375 user 0m0.448s 00:13:38.375 sys 0m0.137s 00:13:38.375 09:26:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.375 09:26:31 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:38.375 ************************************ 00:13:38.375 END TEST accel_dif_functional_tests 00:13:38.375 ************************************ 00:13:38.375 00:13:38.375 real 0m30.259s 00:13:38.375 user 0m33.763s 00:13:38.375 sys 0m4.187s 00:13:38.375 09:26:31 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.375 09:26:31 accel -- common/autotest_common.sh@10 -- # set +x 00:13:38.375 ************************************ 00:13:38.375 END TEST accel 00:13:38.375 ************************************ 00:13:38.375 09:26:31 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:38.375 09:26:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:38.375 09:26:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:38.375 09:26:31 -- common/autotest_common.sh@10 -- # set +x 00:13:38.375 ************************************ 00:13:38.375 START TEST accel_rpc 00:13:38.375 ************************************ 00:13:38.376 09:26:31 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:13:38.637 * Looking for test storage... 
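The accel_dif suite that just reported 20 of 20 tests passed is a CUnit binary rather than accel_perf, and the *ERROR* lines from dif.c above are expected output: the negative tests deliberately corrupt the Guard/App/Ref tags and assert that verification fails. As invoked by the harness (hedged, and not standalone-runnable exactly as shown, because accel.sh is what opens fd 62 with the generated accel JSON config):

# run the DIF functional tests against whatever accel config fd 62 carries
"$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62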
00:13:38.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:13:38.637 09:26:31 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:38.637 09:26:31 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=152892 00:13:38.637 09:26:31 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 152892 00:13:38.637 09:26:31 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:38.637 09:26:31 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 152892 ']' 00:13:38.637 09:26:31 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.637 09:26:31 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.637 09:26:31 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.637 09:26:31 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.637 09:26:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.637 [2024-05-16 09:26:32.030583] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:13:38.637 [2024-05-16 09:26:32.030651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152892 ] 00:13:38.637 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.637 [2024-05-16 09:26:32.095331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.637 [2024-05-16 09:26:32.169180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.580 09:26:32 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.580 09:26:32 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:39.580 09:26:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:39.580 09:26:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:39.580 09:26:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:39.580 09:26:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:39.580 09:26:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:39.580 09:26:32 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:39.580 09:26:32 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:39.580 09:26:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 ************************************ 00:13:39.580 START TEST accel_assign_opcode 00:13:39.580 ************************************ 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 [2024-05-16 09:26:32.847225] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 [2024-05-16 09:26:32.859247] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.580 09:26:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.580 software 00:13:39.580 00:13:39.580 real 0m0.216s 00:13:39.580 user 0m0.052s 00:13:39.580 sys 0m0.008s 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:39.580 09:26:33 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:39.580 ************************************ 00:13:39.580 END TEST accel_assign_opcode 00:13:39.580 ************************************ 00:13:39.580 09:26:33 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 152892 00:13:39.580 09:26:33 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 152892 ']' 00:13:39.580 09:26:33 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 152892 00:13:39.580 09:26:33 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:13:39.580 09:26:33 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:39.580 09:26:33 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 152892 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 152892' 00:13:39.841 killing process with pid 152892 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@965 -- # kill 152892 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@970 -- # wait 152892 00:13:39.841 00:13:39.841 real 0m1.484s 00:13:39.841 user 0m1.586s 00:13:39.841 sys 0m0.397s 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:39.841 09:26:33 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.841 ************************************ 00:13:39.841 END TEST accel_rpc 00:13:39.841 ************************************ 00:13:39.841 09:26:33 -- spdk/autotest.sh@181 -- # run_test 
app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:13:39.841 09:26:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:39.841 09:26:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:39.841 09:26:33 -- common/autotest_common.sh@10 -- # set +x 00:13:40.103 ************************************ 00:13:40.103 START TEST app_cmdline 00:13:40.103 ************************************ 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:13:40.103 * Looking for test storage... 00:13:40.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:40.103 09:26:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:40.103 09:26:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=153298 00:13:40.103 09:26:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 153298 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 153298 ']' 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:40.103 09:26:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:40.103 09:26:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:40.103 [2024-05-16 09:26:33.597890] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
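A note for anyone reproducing the cmdline test by hand: the setup above amounts to starting spdk_tgt with an RPC allowlist and then talking to it over its UNIX socket with scripts/rpc.py. A minimal sketch, assuming the SPDK checkout sits at a hypothetical $SPDK_ROOT and the target uses its default /var/tmp/spdk.sock socket:

  # launch the target, permitting only the two version-related RPCs
  $SPDK_ROOT/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  spdk_tgt_pid=$!

  # allowed methods succeed once the socket is up
  $SPDK_ROOT/scripts/rpc.py spdk_get_version     # returns the version JSON shown below
  $SPDK_ROOT/scripts/rpc.py rpc_get_methods      # lists exactly the two allowed methods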
00:13:40.103 [2024-05-16 09:26:33.597946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153298 ] 00:13:40.103 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.103 [2024-05-16 09:26:33.658413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.364 [2024-05-16 09:26:33.728168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.937 09:26:34 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.937 09:26:34 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:13:40.937 09:26:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:13:40.937 { 00:13:40.937 "version": "SPDK v24.05-pre git sha1 cc94f3031", 00:13:40.937 "fields": { 00:13:40.937 "major": 24, 00:13:40.937 "minor": 5, 00:13:40.937 "patch": 0, 00:13:40.937 "suffix": "-pre", 00:13:40.937 "commit": "cc94f3031" 00:13:40.937 } 00:13:40.937 } 00:13:40.937 09:26:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:40.937 09:26:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:40.937 09:26:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:40.937 09:26:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:40.937 09:26:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:40.937 09:26:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.937 09:26:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:41.198 09:26:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:41.198 09:26:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.198 09:26:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:41.198 09:26:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:41.198 09:26:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.198 09:26:34 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:41.198 request: 00:13:41.198 { 00:13:41.198 "method": "env_dpdk_get_mem_stats", 00:13:41.198 "req_id": 1 00:13:41.198 } 00:13:41.198 Got JSON-RPC error response 00:13:41.198 response: 00:13:41.198 { 00:13:41.198 "code": -32601, 00:13:41.198 "message": "Method not found" 00:13:41.198 } 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.198 09:26:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 153298 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 153298 ']' 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 153298 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153298 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153298' 00:13:41.198 killing process with pid 153298 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@965 -- # kill 153298 00:13:41.198 09:26:34 app_cmdline -- common/autotest_common.sh@970 -- # wait 153298 00:13:41.459 00:13:41.459 real 0m1.520s 00:13:41.459 user 0m1.824s 00:13:41.459 sys 0m0.385s 00:13:41.459 09:26:34 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.459 09:26:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:41.459 ************************************ 00:13:41.459 END TEST app_cmdline 00:13:41.459 ************************************ 00:13:41.459 09:26:34 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:13:41.459 09:26:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:41.459 09:26:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.459 09:26:34 -- common/autotest_common.sh@10 -- # set +x 00:13:41.724 ************************************ 00:13:41.724 START TEST version 00:13:41.724 ************************************ 00:13:41.724 09:26:35 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:13:41.724 * Looking for test storage... 
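Before the version test proper, it is worth spelling out the negative case app_cmdline just exercised above: any RPC outside the --rpcs-allowed list is rejected with JSON-RPC error -32601. A sketch of that check, again with a hypothetical $SPDK_ROOT prefix; the error body is the one printed in the log:

  # env_dpdk_get_mem_stats was not on the allowlist, so the call must fail
  if $SPDK_ROOT/scripts/rpc.py env_dpdk_get_mem_stats; then
      echo "unexpected success" >&2
      exit 1
  fi
  # expected error response:
  # { "code": -32601, "message": "Method not found" }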
00:13:41.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:41.724 09:26:35 version -- app/version.sh@17 -- # get_header_version major 00:13:41.724 09:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # cut -f2 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:13:41.724 09:26:35 version -- app/version.sh@17 -- # major=24 00:13:41.724 09:26:35 version -- app/version.sh@18 -- # get_header_version minor 00:13:41.724 09:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # cut -f2 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:13:41.724 09:26:35 version -- app/version.sh@18 -- # minor=5 00:13:41.724 09:26:35 version -- app/version.sh@19 -- # get_header_version patch 00:13:41.724 09:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # cut -f2 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:13:41.724 09:26:35 version -- app/version.sh@19 -- # patch=0 00:13:41.724 09:26:35 version -- app/version.sh@20 -- # get_header_version suffix 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # cut -f2 00:13:41.724 09:26:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:13:41.724 09:26:35 version -- app/version.sh@14 -- # tr -d '"' 00:13:41.724 09:26:35 version -- app/version.sh@20 -- # suffix=-pre 00:13:41.724 09:26:35 version -- app/version.sh@22 -- # version=24.5 00:13:41.724 09:26:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:41.724 09:26:35 version -- app/version.sh@28 -- # version=24.5rc0 00:13:41.724 09:26:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:41.724 09:26:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:41.724 09:26:35 version -- app/version.sh@30 -- # py_version=24.5rc0 00:13:41.724 09:26:35 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:41.724 00:13:41.724 real 0m0.172s 00:13:41.724 user 0m0.088s 00:13:41.724 sys 0m0.121s 00:13:41.724 09:26:35 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.724 09:26:35 version -- common/autotest_common.sh@10 -- # set +x 00:13:41.724 ************************************ 00:13:41.724 END TEST version 00:13:41.724 ************************************ 00:13:41.724 09:26:35 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:41.724 09:26:35 -- spdk/autotest.sh@194 -- # uname -s 00:13:41.724 09:26:35 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:41.724 09:26:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:41.724 09:26:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:41.724 09:26:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
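The version test that just ran is essentially four grep/cut/tr pipelines over include/spdk/version.h plus a comparison against the installed Python package. A condensed sketch of the same flow, with paths shortened to a hypothetical $SPDK_ROOT; the rc0 handling is inferred from this run, where the -pre suffix produced 24.5rc0 on both sides:

  ver_h=$SPDK_ROOT/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  version=$major.$minor                      # 24.5 here; the patch digit is only appended when non-zero
  [[ -n $suffix ]] && version=${version}rc0  # assumption: a pre-release suffix maps to rc0, as seen above
  py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]]            # both evaluated to 24.5rc0 in this run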
00:13:41.724 09:26:35 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:13:41.724 09:26:35 -- spdk/autotest.sh@256 -- # timing_exit lib 00:13:41.724 09:26:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.724 09:26:35 -- common/autotest_common.sh@10 -- # set +x 00:13:41.986 09:26:35 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:13:41.986 09:26:35 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:13:41.986 09:26:35 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:13:41.986 09:26:35 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:13:41.986 09:26:35 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:13:41.986 09:26:35 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:13:41.986 09:26:35 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:41.986 09:26:35 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.986 09:26:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.986 09:26:35 -- common/autotest_common.sh@10 -- # set +x 00:13:41.986 ************************************ 00:13:41.986 START TEST nvmf_tcp 00:13:41.986 ************************************ 00:13:41.986 09:26:35 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:41.986 * Looking for test storage... 00:13:41.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.986 09:26:35 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.986 09:26:35 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.986 09:26:35 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.986 09:26:35 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.986 09:26:35 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.986 09:26:35 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.986 09:26:35 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:13:41.986 09:26:35 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:41.986 09:26:35 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:41.986 09:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:41.986 09:26:35 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:41.986 09:26:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.986 09:26:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.986 
09:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.986 ************************************ 00:13:41.986 START TEST nvmf_example 00:13:41.986 ************************************ 00:13:41.986 09:26:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:42.249 * Looking for test storage... 00:13:42.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:42.249 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:42.250 09:26:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:13:42.250 09:26:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:48.848 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:48.848 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:48.848 Found net devices under 
0000:4b:00.0: cvl_0_0 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:48.848 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.848 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.849 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:49.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.790 ms 00:13:49.110 00:13:49.110 --- 10.0.0.2 ping statistics --- 00:13:49.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.110 rtt min/avg/max/mdev = 0.790/0.790/0.790/0.000 ms 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:13:49.110 00:13:49.110 --- 10.0.0.1 ping statistics --- 00:13:49.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.110 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:49.110 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=157400 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 157400 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 157400 ']' 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
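The "fabric" behind this test is just the two ports of one E810 NIC joined through a network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, with port 4420 opened for NVMe/TCP. Condensed from the commands traced above (interface names are whatever this particular host enumerates):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp                                                # the harness loads the kernel module up front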
00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.111 09:26:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:49.372 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.945 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:49.945 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:13:49.945 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:49.945 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.945 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:50.207 09:26:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:50.207 EAL: No free 2048 kB hugepages reported on node 1 
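With the example target running inside the namespace, the target side is assembled from five RPCs and then exercised with spdk_nvme_perf from the root namespace. A sketch of that sequence as plain rpc.py invocations; the test's rpc_cmd helper picks the RPC socket for you, so the bare calls below assume the default socket and a hypothetical $SPDK_ROOT prefix:

  rpc=$SPDK_ROOT/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options used above
  $rpc bdev_malloc_create 64 512                                   # 64 MB RAM bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10 s of 4 KiB random I/O at queue depth 64 (-M sets the read percentage of the mix)
  $SPDK_ROOT/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'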
00:14:02.440 Initializing NVMe Controllers 00:14:02.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:02.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:02.440 Initialization complete. Launching workers. 00:14:02.440 ======================================================== 00:14:02.440 Latency(us) 00:14:02.440 Device Information : IOPS MiB/s Average min max 00:14:02.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18982.94 74.15 3371.10 628.88 15796.42 00:14:02.440 ======================================================== 00:14:02.440 Total : 18982.94 74.15 3371.10 628.88 15796.42 00:14:02.440 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:02.440 rmmod nvme_tcp 00:14:02.440 rmmod nvme_fabrics 00:14:02.440 rmmod nvme_keyring 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:14:02.440 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 157400 ']' 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 157400 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 157400 ']' 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 157400 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157400 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157400' 00:14:02.441 killing process with pid 157400 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 157400 00:14:02.441 09:26:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 157400 00:14:02.441 nvmf threads initialize successfully 00:14:02.441 bdev subsystem init successfully 00:14:02.441 created a nvmf target service 00:14:02.441 create targets's poll groups done 00:14:02.441 all subsystems of target started 00:14:02.441 nvmf target is running 00:14:02.441 all subsystems of target stopped 00:14:02.441 destroy targets's poll groups done 00:14:02.441 destroyed the nvmf target service 00:14:02.441 bdev subsystem finish successfully 00:14:02.441 nvmf threads destroy successfully 00:14:02.441 09:26:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.441 09:26:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.702 09:26:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.702 09:26:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:02.702 09:26:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.702 09:26:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:02.702 00:14:02.702 real 0m20.647s 00:14:02.702 user 0m46.235s 00:14:02.702 sys 0m6.227s 00:14:02.702 09:26:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.702 09:26:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:02.702 ************************************ 00:14:02.702 END TEST nvmf_example 00:14:02.702 ************************************ 00:14:02.702 09:26:56 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:02.702 09:26:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.702 09:26:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.702 09:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.702 ************************************ 00:14:02.702 START TEST nvmf_filesystem 00:14:02.702 ************************************ 00:14:02.702 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:02.966 * Looking for test storage... 
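Teardown in nvmftestfini, visible just above, mirrors the setup: unload the kernel NVMe modules, kill the example target, and undo the namespace plumbing. Condensed from those lines, with the namespace removal spelled out as an assumption about what the remove_spdk_ns helper does here:

  modprobe -v -r nvme-tcp            # the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill $nvmfpid && wait $nvmfpid     # pid 157400 in this run
  ip netns delete cvl_0_0_ns_spdk    # assumption: remove_spdk_ns amounts to deleting the test namespace
  ip -4 addr flush cvl_0_1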
00:14:02.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:02.966 09:26:56 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:02.966 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:02.967 09:26:56 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:02.967 
09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:02.967 #define SPDK_CONFIG_H 00:14:02.967 #define SPDK_CONFIG_APPS 1 00:14:02.967 #define SPDK_CONFIG_ARCH native 00:14:02.967 #undef SPDK_CONFIG_ASAN 00:14:02.967 #undef SPDK_CONFIG_AVAHI 00:14:02.967 #undef SPDK_CONFIG_CET 00:14:02.967 #define SPDK_CONFIG_COVERAGE 1 00:14:02.967 #define SPDK_CONFIG_CROSS_PREFIX 00:14:02.967 #undef SPDK_CONFIG_CRYPTO 00:14:02.967 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:02.967 #undef SPDK_CONFIG_CUSTOMOCF 00:14:02.967 #undef SPDK_CONFIG_DAOS 00:14:02.967 #define SPDK_CONFIG_DAOS_DIR 00:14:02.967 #define SPDK_CONFIG_DEBUG 1 00:14:02.967 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:02.967 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:02.967 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:02.967 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:02.967 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:02.967 #undef SPDK_CONFIG_DPDK_UADK 00:14:02.967 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:02.967 #define SPDK_CONFIG_EXAMPLES 1 00:14:02.967 #undef SPDK_CONFIG_FC 00:14:02.967 #define SPDK_CONFIG_FC_PATH 00:14:02.967 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:02.967 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:02.967 #undef SPDK_CONFIG_FUSE 00:14:02.967 #undef SPDK_CONFIG_FUZZER 00:14:02.967 #define SPDK_CONFIG_FUZZER_LIB 00:14:02.967 #undef SPDK_CONFIG_GOLANG 00:14:02.967 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:02.967 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:02.967 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:02.967 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:14:02.967 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:02.967 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:02.967 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:02.967 #define SPDK_CONFIG_IDXD 1 00:14:02.967 #undef SPDK_CONFIG_IDXD_KERNEL 00:14:02.967 #undef SPDK_CONFIG_IPSEC_MB 00:14:02.967 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:02.967 #define SPDK_CONFIG_ISAL 1 00:14:02.967 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:02.967 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:02.967 #define SPDK_CONFIG_LIBDIR 00:14:02.967 #undef SPDK_CONFIG_LTO 00:14:02.967 #define SPDK_CONFIG_MAX_LCORES 00:14:02.967 #define SPDK_CONFIG_NVME_CUSE 1 00:14:02.967 #undef SPDK_CONFIG_OCF 00:14:02.967 #define SPDK_CONFIG_OCF_PATH 00:14:02.967 #define SPDK_CONFIG_OPENSSL_PATH 00:14:02.967 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:02.967 #define SPDK_CONFIG_PGO_DIR 00:14:02.967 #undef 
SPDK_CONFIG_PGO_USE 00:14:02.967 #define SPDK_CONFIG_PREFIX /usr/local 00:14:02.967 #undef SPDK_CONFIG_RAID5F 00:14:02.967 #undef SPDK_CONFIG_RBD 00:14:02.967 #define SPDK_CONFIG_RDMA 1 00:14:02.967 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:02.967 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:02.967 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:02.967 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:02.967 #define SPDK_CONFIG_SHARED 1 00:14:02.967 #undef SPDK_CONFIG_SMA 00:14:02.967 #define SPDK_CONFIG_TESTS 1 00:14:02.967 #undef SPDK_CONFIG_TSAN 00:14:02.967 #define SPDK_CONFIG_UBLK 1 00:14:02.967 #define SPDK_CONFIG_UBSAN 1 00:14:02.967 #undef SPDK_CONFIG_UNIT_TESTS 00:14:02.967 #undef SPDK_CONFIG_URING 00:14:02.967 #define SPDK_CONFIG_URING_PATH 00:14:02.967 #undef SPDK_CONFIG_URING_ZNS 00:14:02.967 #undef SPDK_CONFIG_USDT 00:14:02.967 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:02.967 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:02.967 #define SPDK_CONFIG_VFIO_USER 1 00:14:02.967 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:02.967 #define SPDK_CONFIG_VHOST 1 00:14:02.967 #define SPDK_CONFIG_VIRTIO 1 00:14:02.967 #undef SPDK_CONFIG_VTUNE 00:14:02.967 #define SPDK_CONFIG_VTUNE_DIR 00:14:02.967 #define SPDK_CONFIG_WERROR 1 00:14:02.967 #define SPDK_CONFIG_WPDK_DIR 00:14:02.967 #undef SPDK_CONFIG_XNVME 00:14:02.967 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.967 09:26:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:14:02.968 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:14:02.969 09:26:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
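
The long runs of ": 0" / "export SPDK_TEST_*" pairs and the ASAN/UBSAN/LSAN exports traced above are autotest_common.sh settling every test switch and sanitizer option to a default before the filesystem test itself starts. A minimal sketch of that pattern, under the assumption that each ": N" in the trace is the expansion of a : "${VAR:=default}" guard (variable names shortened here; the real list is much longer):

# Hedged sketch: each test knob gets a default only if the caller has not
# already exported one, then is exported so every child script sees it.
: "${SPDK_TEST_NVMF:=0}"               # 1 enables the NVMe-oF test group
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # transport under test in this run
export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT

# Sanitizer runtime options, mirroring the exports visible in the trace.
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'

# Known leaks (libfuse3 here) go into a suppression file that LeakSanitizer
# reads at process exit.
asan_suppression_file=/var/tmp/asan_suppression_file
echo 'leak:libfuse3.so' > "$asan_suppression_file"
export LSAN_OPTIONS="suppressions=$asan_suppression_file"

Because the switches are exported rather than passed as arguments, the same values remain visible to every helper script sourced later in the trace.
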
00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 160325 ]] 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 160325 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:14:02.969 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.oix367 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.oix367/tests/target /tmp/spdk.oix367 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=971476992 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4312952832 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=124345704448 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129370984448 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5025280000 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64682115072 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685490176 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25864507392 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874198528 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9691136 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=391168 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:14:02.970 09:26:56 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=112640 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64685187072 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685494272 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=307200 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12937093120 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937097216 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:14:02.970 * Looking for test storage... 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=124345704448 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=7239872512 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.970 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.231 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.231 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:03.231 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:03.232 09:26:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:09.823 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:14:09.823 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:09.823 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:09.823 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.823 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.824 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.824 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.824 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.824 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.824 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.824 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:10.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:14:10.085 00:14:10.085 --- 10.0.0.2 ping statistics --- 00:14:10.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.085 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:14:10.085 00:14:10.085 --- 10.0.0.1 ping statistics --- 00:14:10.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.085 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:10.085 ************************************ 00:14:10.085 START TEST nvmf_filesystem_no_in_capsule 00:14:10.085 ************************************ 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=164235 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 164235 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 164235 ']' 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.085 09:27:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:10.353 [2024-05-16 09:27:03.690328] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:14:10.353 [2024-05-16 09:27:03.690390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.353 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.353 [2024-05-16 09:27:03.762109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:10.353 [2024-05-16 09:27:03.839286] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.353 [2024-05-16 09:27:03.839324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.353 [2024-05-16 09:27:03.839332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.353 [2024-05-16 09:27:03.839338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.353 [2024-05-16 09:27:03.839344] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.353 [2024-05-16 09:27:03.839488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.353 [2024-05-16 09:27:03.839604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.354 [2024-05-16 09:27:03.839731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.354 [2024-05-16 09:27:03.839733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.927 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:10.927 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:14:10.927 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.927 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.927 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.188 [2024-05-16 09:27:04.515649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.188 09:27:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.188 Malloc1 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.188 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.189 [2024-05-16 09:27:04.644056] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:11.189 [2024-05-16 09:27:04.644288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:11.189 09:27:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:14:11.189 { 00:14:11.189 "name": "Malloc1", 00:14:11.189 "aliases": [ 00:14:11.189 "a2e18516-6dad-4bc4-bc11-ae9eb6aa639e" 00:14:11.189 ], 00:14:11.189 "product_name": "Malloc disk", 00:14:11.189 "block_size": 512, 00:14:11.189 "num_blocks": 1048576, 00:14:11.189 "uuid": "a2e18516-6dad-4bc4-bc11-ae9eb6aa639e", 00:14:11.189 "assigned_rate_limits": { 00:14:11.189 "rw_ios_per_sec": 0, 00:14:11.189 "rw_mbytes_per_sec": 0, 00:14:11.189 "r_mbytes_per_sec": 0, 00:14:11.189 "w_mbytes_per_sec": 0 00:14:11.189 }, 00:14:11.189 "claimed": true, 00:14:11.189 "claim_type": "exclusive_write", 00:14:11.189 "zoned": false, 00:14:11.189 "supported_io_types": { 00:14:11.189 "read": true, 00:14:11.189 "write": true, 00:14:11.189 "unmap": true, 00:14:11.189 "write_zeroes": true, 00:14:11.189 "flush": true, 00:14:11.189 "reset": true, 00:14:11.189 "compare": false, 00:14:11.189 "compare_and_write": false, 00:14:11.189 "abort": true, 00:14:11.189 "nvme_admin": false, 00:14:11.189 "nvme_io": false 00:14:11.189 }, 00:14:11.189 "memory_domains": [ 00:14:11.189 { 00:14:11.189 "dma_device_id": "system", 00:14:11.189 "dma_device_type": 1 00:14:11.189 }, 00:14:11.189 { 00:14:11.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.189 "dma_device_type": 2 00:14:11.189 } 00:14:11.189 ], 00:14:11.189 "driver_specific": {} 00:14:11.189 } 00:14:11.189 ]' 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:14:11.189 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:14:11.450 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:14:11.450 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:14:11.450 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:14:11.450 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:11.450 09:27:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.835 09:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.835 09:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:14:12.835 09:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.835 09:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:12.835 09:27:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:15.383 09:27:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:16.325 ************************************ 00:14:16.325 START TEST filesystem_ext4 00:14:16.325 ************************************ 
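Each filesystem_* subtest that the banner above opens repeats the same cycle against the partition exported over NVMe/TCP: create the filesystem, mount it, do a small write-and-delete round trip, unmount, and confirm that both the block device and the nvmf_tgt process survived. A condensed sketch of that cycle, paraphrasing the traced target/filesystem.sh rather than quoting it, and assuming the namespace shows up as /dev/nvme0n1 with partition /dev/nvme0n1p1 as it does in this run ($nvmfpid stands in for the target pid, 164235 here):

  fstype=ext4                              # the harness repeats this for btrfs and xfs
  mkfs.$fstype -F /dev/nvme0n1p1           # ext4 forces with -F; btrfs and xfs use -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync            # small write to prove the data path works
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition must still be visible to the host

The ext4 run below, and the btrfs and xfs runs after it, are this loop with only the mkfs invocation changing.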
00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:14:16.325 09:27:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:16.325 mke2fs 1.46.5 (30-Dec-2021) 00:14:16.587 Discarding device blocks: 0/522240 done 00:14:16.587 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:16.587 Filesystem UUID: 6bc13a0d-547d-4872-abe6-d7d1dad13e43 00:14:16.587 Superblock backups stored on blocks: 00:14:16.587 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:16.587 00:14:16.587 Allocating group tables: 0/64 done 00:14:16.587 Writing inode tables: 0/64 done 00:14:16.587 Creating journal (8192 blocks): done 00:14:16.587 Writing superblocks and filesystem accounting information: 0/64 done 00:14:16.587 00:14:16.587 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:14:16.587 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164235 00:14:16.848 09:27:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:16.848 00:14:16.848 real 0m0.434s 00:14:16.848 user 0m0.022s 00:14:16.848 sys 0m0.072s 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:16.848 ************************************ 00:14:16.848 END TEST filesystem_ext4 00:14:16.848 ************************************ 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:16.848 ************************************ 00:14:16.848 START TEST filesystem_btrfs 00:14:16.848 ************************************ 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:14:16.848 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:17.420 btrfs-progs v6.6.2 00:14:17.420 See https://btrfs.readthedocs.io for more information. 
00:14:17.420 00:14:17.420 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:17.420 NOTE: several default settings have changed in version 5.15, please make sure 00:14:17.420 this does not affect your deployments: 00:14:17.420 - DUP for metadata (-m dup) 00:14:17.420 - enabled no-holes (-O no-holes) 00:14:17.420 - enabled free-space-tree (-R free-space-tree) 00:14:17.420 00:14:17.420 Label: (null) 00:14:17.420 UUID: 7438c3bb-50b8-4a13-8724-9537ab162a46 00:14:17.420 Node size: 16384 00:14:17.420 Sector size: 4096 00:14:17.420 Filesystem size: 510.00MiB 00:14:17.420 Block group profiles: 00:14:17.420 Data: single 8.00MiB 00:14:17.420 Metadata: DUP 32.00MiB 00:14:17.420 System: DUP 8.00MiB 00:14:17.420 SSD detected: yes 00:14:17.420 Zoned device: no 00:14:17.420 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:17.420 Runtime features: free-space-tree 00:14:17.420 Checksum: crc32c 00:14:17.420 Number of devices: 1 00:14:17.420 Devices: 00:14:17.420 ID SIZE PATH 00:14:17.420 1 510.00MiB /dev/nvme0n1p1 00:14:17.420 00:14:17.420 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:14:17.420 09:27:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164235 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:17.681 00:14:17.681 real 0m0.699s 00:14:17.681 user 0m0.039s 00:14:17.681 sys 0m0.142s 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:17.681 ************************************ 00:14:17.681 END TEST filesystem_btrfs 00:14:17.681 ************************************ 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test 
filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.681 ************************************ 00:14:17.681 START TEST filesystem_xfs 00:14:17.681 ************************************ 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:14:17.681 09:27:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:17.947 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:17.947 = sectsz=512 attr=2, projid32bit=1 00:14:17.947 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:17.947 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:17.947 data = bsize=4096 blocks=130560, imaxpct=25 00:14:17.947 = sunit=0 swidth=0 blks 00:14:17.947 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:17.947 log =internal log bsize=4096 blocks=16384, version=2 00:14:17.947 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:17.947 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:18.524 Discarding blocks...Done. 
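Once the XFS pass below finishes, the no-in-capsule half of the suite tears its target down before nvmf_filesystem_part is re-run with a 4096-byte in-capsule data size. The following is condensed from the commands visible later in the trace, with rpc.py standing in for the harness's rpc_cmd wrapper and $nvmfpid for the nvmf_tgt pid (164235 in this run), so treat it as a sketch rather than the literal script:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1         # drop the SPDK_TEST partition under a device lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the kernel initiator
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop the target and reap it

The in-capsule variant then repeats the whole setup with nvmf_create_transport -t tcp -o -u 8192 -c 4096 instead of -c 0.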
00:14:18.524 09:27:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:14:18.524 09:27:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164235 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:21.823 00:14:21.823 real 0m3.742s 00:14:21.823 user 0m0.018s 00:14:21.823 sys 0m0.122s 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:21.823 ************************************ 00:14:21.823 END TEST filesystem_xfs 00:14:21.823 ************************************ 00:14:21.823 09:27:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:21.823 
09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164235 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 164235 ']' 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 164235 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.823 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 164235 00:14:21.824 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:21.824 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:21.824 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 164235' 00:14:21.824 killing process with pid 164235 00:14:21.824 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 164235 00:14:21.824 [2024-05-16 09:27:15.247641] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:21.824 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 164235 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:22.085 00:14:22.085 real 0m11.848s 00:14:22.085 user 0m46.619s 00:14:22.085 sys 0m1.250s 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 ************************************ 00:14:22.085 END TEST nvmf_filesystem_no_in_capsule 00:14:22.085 ************************************ 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 
-le 1 ']' 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 ************************************ 00:14:22.085 START TEST nvmf_filesystem_in_capsule 00:14:22.085 ************************************ 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=167285 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 167285 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 167285 ']' 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:22.085 09:27:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.085 [2024-05-16 09:27:15.617210] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:14:22.085 [2024-05-16 09:27:15.617254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.346 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.346 [2024-05-16 09:27:15.681423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:22.346 [2024-05-16 09:27:15.746119] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.346 [2024-05-16 09:27:15.746158] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:22.346 [2024-05-16 09:27:15.746166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.346 [2024-05-16 09:27:15.746173] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.346 [2024-05-16 09:27:15.746178] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.346 [2024-05-16 09:27:15.746313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.346 [2024-05-16 09:27:15.746431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.346 [2024-05-16 09:27:15.746584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.346 [2024-05-16 09:27:15.746584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.918 [2024-05-16 09:27:16.437718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.918 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 Malloc1 00:14:23.179 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.179 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.179 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.179 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.179 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.179 09:27:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.180 [2024-05-16 09:27:16.562089] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.180 [2024-05-16 09:27:16.562329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:14:23.180 { 00:14:23.180 "name": "Malloc1", 00:14:23.180 "aliases": [ 00:14:23.180 "fad27276-8841-42ff-94bf-1aec6f9a2ed6" 00:14:23.180 ], 00:14:23.180 "product_name": "Malloc disk", 00:14:23.180 "block_size": 512, 00:14:23.180 "num_blocks": 1048576, 00:14:23.180 "uuid": "fad27276-8841-42ff-94bf-1aec6f9a2ed6", 00:14:23.180 "assigned_rate_limits": { 00:14:23.180 "rw_ios_per_sec": 0, 00:14:23.180 "rw_mbytes_per_sec": 0, 00:14:23.180 "r_mbytes_per_sec": 0, 00:14:23.180 "w_mbytes_per_sec": 0 00:14:23.180 }, 00:14:23.180 "claimed": true, 00:14:23.180 "claim_type": "exclusive_write", 00:14:23.180 "zoned": false, 00:14:23.180 "supported_io_types": { 00:14:23.180 "read": true, 00:14:23.180 "write": true, 00:14:23.180 "unmap": true, 00:14:23.180 "write_zeroes": true, 00:14:23.180 "flush": true, 00:14:23.180 "reset": true, 
00:14:23.180 "compare": false, 00:14:23.180 "compare_and_write": false, 00:14:23.180 "abort": true, 00:14:23.180 "nvme_admin": false, 00:14:23.180 "nvme_io": false 00:14:23.180 }, 00:14:23.180 "memory_domains": [ 00:14:23.180 { 00:14:23.180 "dma_device_id": "system", 00:14:23.180 "dma_device_type": 1 00:14:23.180 }, 00:14:23.180 { 00:14:23.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.180 "dma_device_type": 2 00:14:23.180 } 00:14:23.180 ], 00:14:23.180 "driver_specific": {} 00:14:23.180 } 00:14:23.180 ]' 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:23.180 09:27:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.096 09:27:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:25.096 09:27:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:14:25.096 09:27:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.096 09:27:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:25.096 09:27:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:27.012 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:27.013 09:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:27.585 09:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:28.974 ************************************ 00:14:28.974 START TEST filesystem_in_capsule_ext4 00:14:28.974 ************************************ 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:14:28.974 09:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:28.974 mke2fs 1.46.5 (30-Dec-2021) 00:14:28.974 Discarding device blocks: 0/522240 done 00:14:28.974 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:28.974 Filesystem UUID: 22858df7-2dbc-4aa6-8885-bc21f531cc4e 00:14:28.974 Superblock backups stored on blocks: 00:14:28.974 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:28.974 00:14:28.974 Allocating group tables: 0/64 done 00:14:28.974 Writing inode tables: 0/64 done 00:14:28.974 Creating journal (8192 blocks): done 00:14:29.809 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:14:29.809 00:14:29.809 09:27:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:14:29.809 09:27:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 167285 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:30.766 00:14:30.766 real 0m2.124s 00:14:30.766 user 0m0.027s 00:14:30.766 sys 0m0.070s 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.766 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:30.766 ************************************ 00:14:30.766 END TEST filesystem_in_capsule_ext4 00:14:30.766 ************************************ 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:31.027 ************************************ 00:14:31.027 START TEST filesystem_in_capsule_btrfs 00:14:31.027 ************************************ 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:14:31.027 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:31.027 btrfs-progs v6.6.2 00:14:31.027 See https://btrfs.readthedocs.io for more information. 00:14:31.027 00:14:31.027 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:31.028 NOTE: several default settings have changed in version 5.15, please make sure 00:14:31.028 this does not affect your deployments: 00:14:31.028 - DUP for metadata (-m dup) 00:14:31.028 - enabled no-holes (-O no-holes) 00:14:31.028 - enabled free-space-tree (-R free-space-tree) 00:14:31.028 00:14:31.028 Label: (null) 00:14:31.028 UUID: 536a9600-61d0-4609-bb28-9c851eb17d5e 00:14:31.028 Node size: 16384 00:14:31.028 Sector size: 4096 00:14:31.028 Filesystem size: 510.00MiB 00:14:31.028 Block group profiles: 00:14:31.028 Data: single 8.00MiB 00:14:31.028 Metadata: DUP 32.00MiB 00:14:31.028 System: DUP 8.00MiB 00:14:31.028 SSD detected: yes 00:14:31.028 Zoned device: no 00:14:31.028 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:31.028 Runtime features: free-space-tree 00:14:31.028 Checksum: crc32c 00:14:31.028 Number of devices: 1 00:14:31.028 Devices: 00:14:31.028 ID SIZE PATH 00:14:31.028 1 510.00MiB /dev/nvme0n1p1 00:14:31.028 00:14:31.028 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:14:31.028 09:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:31.970 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:31.970 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:31.970 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:31.970 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:31.970 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:31.970 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167285 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:32.231 00:14:32.231 real 0m1.176s 00:14:32.231 user 0m0.033s 00:14:32.231 sys 0m0.125s 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.231 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:32.231 ************************************ 00:14:32.231 END TEST filesystem_in_capsule_btrfs 00:14:32.232 ************************************ 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:32.232 ************************************ 00:14:32.232 START TEST filesystem_in_capsule_xfs 00:14:32.232 ************************************ 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:14:32.232 09:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:32.232 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:32.232 = sectsz=512 attr=2, projid32bit=1 00:14:32.232 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:32.232 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:32.232 data = bsize=4096 blocks=130560, imaxpct=25 00:14:32.232 = sunit=0 swidth=0 blks 00:14:32.232 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:32.232 log =internal log bsize=4096 blocks=16384, version=2 00:14:32.232 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:32.232 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:33.176 Discarding blocks...Done. 
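For readers following the trace: the per-filesystem check that target/filesystem.sh has been driving above (ext4, btrfs, and now xfs) condenses to roughly the shell sequence below. This is a minimal sketch rather than the script itself; rpc_cmd is the SPDK test-harness RPC wrapper seen in the trace, the size check reflects malloc_size = block_size * num_blocks = 512 * 1048576 = 536870912 bytes (512 MiB), and the device path, mount point, and target PID (167285) are the ones from this particular run.

    # sizes reported by "rpc_cmd bdev_get_bdevs -b Malloc1 | jq" earlier in the trace
    bs=512 ; nb=1048576
    malloc_size=$(( bs * nb ))                 # 536870912 bytes = 512 MiB
    nvme_size=536870912                        # sec_size_to_bytes nvme0n1 on the initiator side
    (( nvme_size == malloc_size ))             # exported size must match what the initiator sees

    # partition, format, and exercise the filesystem (xfs shown; the ext4/btrfs runs are analogous)
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    mkfs.xfs -f /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync              # basic create + flush over NVMe/TCP
    rm /mnt/device/aaa && sync                 # basic delete + flush
    umount /mnt/device
    kill -0 167285                             # the nvmf_tgt process must still be alive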
00:14:33.177 09:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:14:33.177 09:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167285 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:35.729 00:14:35.729 real 0m3.605s 00:14:35.729 user 0m0.037s 00:14:35.729 sys 0m0.067s 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:35.729 ************************************ 00:14:35.729 END TEST filesystem_in_capsule_xfs 00:14:35.729 ************************************ 00:14:35.729 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:35.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.990 09:27:29 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.990 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167285 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 167285 ']' 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 167285 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 167285 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 167285' 00:14:36.252 killing process with pid 167285 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 167285 00:14:36.252 [2024-05-16 09:27:29.607929] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:36.252 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 167285 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:36.513 00:14:36.513 real 0m14.286s 00:14:36.513 user 0m56.395s 00:14:36.513 sys 0m1.214s 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.513 ************************************ 00:14:36.513 END TEST nvmf_filesystem_in_capsule 00:14:36.513 ************************************ 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.513 rmmod nvme_tcp 00:14:36.513 rmmod nvme_fabrics 00:14:36.513 rmmod nvme_keyring 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.513 09:27:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.061 09:27:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.061 00:14:39.061 real 0m35.769s 00:14:39.061 user 1m45.151s 00:14:39.061 sys 0m7.882s 00:14:39.061 09:27:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:39.061 09:27:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.061 ************************************ 00:14:39.061 END TEST nvmf_filesystem 00:14:39.061 ************************************ 00:14:39.061 09:27:32 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:39.061 09:27:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:39.061 09:27:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:39.061 09:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.061 ************************************ 00:14:39.061 START TEST nvmf_target_discovery 00:14:39.061 ************************************ 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:39.061 * Looking for test storage... 
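The nvmf_target_discovery run that begins here stands up four single-namespace subsystems plus a discovery referral before querying them with nvme discover. Condensed from the RPC calls traced further below, the setup is roughly the following sketch (NQNs, address, and ports are the ones used in this run; rpc_cmd is the harness RPC wrapper, and NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh as shown in the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc_cmd bdev_null_create Null$i 102400 512        # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
    # expected: a 6-entry discovery log (the discovery subsystem itself, cnode1-4, and the 4430 referral)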
00:14:39.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.061 09:27:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.650 09:27:38 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:45.650 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:45.650 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:45.650 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:45.650 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.650 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:14:45.651 09:27:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:14:45.651 00:14:45.651 --- 10.0.0.2 ping statistics --- 00:14:45.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.651 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:14:45.651 00:14:45.651 --- 10.0.0.1 ping statistics --- 00:14:45.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.651 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=174270 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 174270 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 174270 ']' 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:14:45.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:45.651 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:45.651 [2024-05-16 09:27:39.139586] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:14:45.651 [2024-05-16 09:27:39.139649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.651 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.912 [2024-05-16 09:27:39.210980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.912 [2024-05-16 09:27:39.286969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.912 [2024-05-16 09:27:39.287008] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.912 [2024-05-16 09:27:39.287015] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.912 [2024-05-16 09:27:39.287022] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.912 [2024-05-16 09:27:39.287028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.912 [2024-05-16 09:27:39.287095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.913 [2024-05-16 09:27:39.287309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.913 [2024-05-16 09:27:39.287310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.913 [2024-05-16 09:27:39.287167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.485 [2024-05-16 09:27:39.975634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.485 09:27:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:46.485 09:27:40 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.485 Null1 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.485 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 [2024-05-16 09:27:40.047814] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:46.746 [2024-05-16 09:27:40.048019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 Null2 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 Null3 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 Null4 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.746 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.747 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:14:47.007 00:14:47.007 Discovery Log Number of Records 6, Generation counter 6 00:14:47.007 =====Discovery Log Entry 0====== 00:14:47.007 trtype: tcp 00:14:47.007 adrfam: ipv4 00:14:47.007 subtype: current discovery subsystem 00:14:47.007 treq: not required 00:14:47.007 portid: 0 00:14:47.007 trsvcid: 4420 00:14:47.007 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:47.007 traddr: 10.0.0.2 00:14:47.007 eflags: explicit discovery connections, duplicate discovery information 00:14:47.007 sectype: none 00:14:47.007 =====Discovery Log Entry 1====== 00:14:47.007 trtype: tcp 00:14:47.007 adrfam: ipv4 00:14:47.007 subtype: nvme subsystem 00:14:47.007 treq: not required 00:14:47.007 portid: 0 00:14:47.007 trsvcid: 4420 00:14:47.007 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:47.007 traddr: 10.0.0.2 00:14:47.007 eflags: none 00:14:47.007 sectype: none 00:14:47.007 =====Discovery Log Entry 2====== 00:14:47.007 trtype: tcp 00:14:47.007 adrfam: ipv4 00:14:47.007 subtype: nvme subsystem 00:14:47.007 treq: not required 00:14:47.007 portid: 0 00:14:47.007 trsvcid: 4420 00:14:47.007 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:47.007 traddr: 10.0.0.2 00:14:47.007 eflags: none 00:14:47.007 sectype: none 00:14:47.007 =====Discovery Log Entry 3====== 00:14:47.007 trtype: tcp 00:14:47.007 adrfam: ipv4 00:14:47.007 subtype: nvme subsystem 00:14:47.007 treq: not required 00:14:47.007 portid: 0 00:14:47.007 trsvcid: 4420 00:14:47.007 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:47.007 traddr: 10.0.0.2 
00:14:47.007 eflags: none 00:14:47.007 sectype: none 00:14:47.007 =====Discovery Log Entry 4====== 00:14:47.007 trtype: tcp 00:14:47.007 adrfam: ipv4 00:14:47.007 subtype: nvme subsystem 00:14:47.007 treq: not required 00:14:47.007 portid: 0 00:14:47.007 trsvcid: 4420 00:14:47.007 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:47.007 traddr: 10.0.0.2 00:14:47.007 eflags: none 00:14:47.007 sectype: none 00:14:47.007 =====Discovery Log Entry 5====== 00:14:47.007 trtype: tcp 00:14:47.007 adrfam: ipv4 00:14:47.007 subtype: discovery subsystem referral 00:14:47.007 treq: not required 00:14:47.008 portid: 0 00:14:47.008 trsvcid: 4430 00:14:47.008 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:47.008 traddr: 10.0.0.2 00:14:47.008 eflags: none 00:14:47.008 sectype: none 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:47.008 Perform nvmf subsystem discovery via RPC 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 [ 00:14:47.008 { 00:14:47.008 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:47.008 "subtype": "Discovery", 00:14:47.008 "listen_addresses": [ 00:14:47.008 { 00:14:47.008 "trtype": "TCP", 00:14:47.008 "adrfam": "IPv4", 00:14:47.008 "traddr": "10.0.0.2", 00:14:47.008 "trsvcid": "4420" 00:14:47.008 } 00:14:47.008 ], 00:14:47.008 "allow_any_host": true, 00:14:47.008 "hosts": [] 00:14:47.008 }, 00:14:47.008 { 00:14:47.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.008 "subtype": "NVMe", 00:14:47.008 "listen_addresses": [ 00:14:47.008 { 00:14:47.008 "trtype": "TCP", 00:14:47.008 "adrfam": "IPv4", 00:14:47.008 "traddr": "10.0.0.2", 00:14:47.008 "trsvcid": "4420" 00:14:47.008 } 00:14:47.008 ], 00:14:47.008 "allow_any_host": true, 00:14:47.008 "hosts": [], 00:14:47.008 "serial_number": "SPDK00000000000001", 00:14:47.008 "model_number": "SPDK bdev Controller", 00:14:47.008 "max_namespaces": 32, 00:14:47.008 "min_cntlid": 1, 00:14:47.008 "max_cntlid": 65519, 00:14:47.008 "namespaces": [ 00:14:47.008 { 00:14:47.008 "nsid": 1, 00:14:47.008 "bdev_name": "Null1", 00:14:47.008 "name": "Null1", 00:14:47.008 "nguid": "2BCC5F0C7CD649E3BB71325C87C31826", 00:14:47.008 "uuid": "2bcc5f0c-7cd6-49e3-bb71-325c87c31826" 00:14:47.008 } 00:14:47.008 ] 00:14:47.008 }, 00:14:47.008 { 00:14:47.008 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:47.008 "subtype": "NVMe", 00:14:47.008 "listen_addresses": [ 00:14:47.008 { 00:14:47.008 "trtype": "TCP", 00:14:47.008 "adrfam": "IPv4", 00:14:47.008 "traddr": "10.0.0.2", 00:14:47.008 "trsvcid": "4420" 00:14:47.008 } 00:14:47.008 ], 00:14:47.008 "allow_any_host": true, 00:14:47.008 "hosts": [], 00:14:47.008 "serial_number": "SPDK00000000000002", 00:14:47.008 "model_number": "SPDK bdev Controller", 00:14:47.008 "max_namespaces": 32, 00:14:47.008 "min_cntlid": 1, 00:14:47.008 "max_cntlid": 65519, 00:14:47.008 "namespaces": [ 00:14:47.008 { 00:14:47.008 "nsid": 1, 00:14:47.008 "bdev_name": "Null2", 00:14:47.008 "name": "Null2", 00:14:47.008 "nguid": "61C53CB696BF4E6AA5532059BF8F71F3", 00:14:47.008 "uuid": "61c53cb6-96bf-4e6a-a553-2059bf8f71f3" 00:14:47.008 } 00:14:47.008 ] 00:14:47.008 }, 00:14:47.008 { 00:14:47.008 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:47.008 "subtype": "NVMe", 00:14:47.008 "listen_addresses": [ 
00:14:47.008 { 00:14:47.008 "trtype": "TCP", 00:14:47.008 "adrfam": "IPv4", 00:14:47.008 "traddr": "10.0.0.2", 00:14:47.008 "trsvcid": "4420" 00:14:47.008 } 00:14:47.008 ], 00:14:47.008 "allow_any_host": true, 00:14:47.008 "hosts": [], 00:14:47.008 "serial_number": "SPDK00000000000003", 00:14:47.008 "model_number": "SPDK bdev Controller", 00:14:47.008 "max_namespaces": 32, 00:14:47.008 "min_cntlid": 1, 00:14:47.008 "max_cntlid": 65519, 00:14:47.008 "namespaces": [ 00:14:47.008 { 00:14:47.008 "nsid": 1, 00:14:47.008 "bdev_name": "Null3", 00:14:47.008 "name": "Null3", 00:14:47.008 "nguid": "A6884E0FBC724B939410C4DAF8BD45BF", 00:14:47.008 "uuid": "a6884e0f-bc72-4b93-9410-c4daf8bd45bf" 00:14:47.008 } 00:14:47.008 ] 00:14:47.008 }, 00:14:47.008 { 00:14:47.008 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:47.008 "subtype": "NVMe", 00:14:47.008 "listen_addresses": [ 00:14:47.008 { 00:14:47.008 "trtype": "TCP", 00:14:47.008 "adrfam": "IPv4", 00:14:47.008 "traddr": "10.0.0.2", 00:14:47.008 "trsvcid": "4420" 00:14:47.008 } 00:14:47.008 ], 00:14:47.008 "allow_any_host": true, 00:14:47.008 "hosts": [], 00:14:47.008 "serial_number": "SPDK00000000000004", 00:14:47.008 "model_number": "SPDK bdev Controller", 00:14:47.008 "max_namespaces": 32, 00:14:47.008 "min_cntlid": 1, 00:14:47.008 "max_cntlid": 65519, 00:14:47.008 "namespaces": [ 00:14:47.008 { 00:14:47.008 "nsid": 1, 00:14:47.008 "bdev_name": "Null4", 00:14:47.008 "name": "Null4", 00:14:47.008 "nguid": "2476BFC735FE49918538E082EAF6E08A", 00:14:47.008 "uuid": "2476bfc7-35fe-4991-8538-e082eaf6e08a" 00:14:47.008 } 00:14:47.008 ] 00:14:47.008 } 00:14:47.008 ] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:47.008 
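Stripped of the xtrace plumbing, the nvmf_target_discovery pass above reduces to the RPC/CLI sequence sketched below. This is a recap, not the script itself: rpc.py stands in for the harness's rpc_cmd wrapper and is assumed to reach the running nvmf_tgt over its default /var/tmp/spdk.sock.

  RPC=./scripts/rpc.py
  for i in $(seq 1 4); do
      $RPC bdev_null_create Null$i 102400 512                        # 100 GiB null bdev, 512 B blocks
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a \
           -s "SPDK$(printf '%014d' "$i")"                           # -a: allow any host
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420      # harness also passes --hostnqn/--hostid from 'nvme gen-hostnqn'
  $RPC nvmf_get_subsystems                      # same view through JSON-RPC
  for i in $(seq 1 4); do                       # teardown
      $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $RPC bdev_null_delete Null$i
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  $RPC bdev_get_bdevs | jq -r '.[].name'        # should print nothing once the Null bdevs are gone

The six discovery-log records shown above (the current discovery subsystem, four NVMe subsystems, and one referral pointing at port 4430) are exactly what this sequence should produce.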
09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.008 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.268 rmmod nvme_tcp 00:14:47.268 rmmod nvme_fabrics 00:14:47.268 rmmod nvme_keyring 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 174270 ']' 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 174270 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 174270 ']' 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 174270 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 174270 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:47.268 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 174270' 00:14:47.269 killing process with pid 174270 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 174270 00:14:47.269 [2024-05-16 09:27:40.679927] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 174270 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.269 09:27:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.816 09:27:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.817 00:14:49.817 real 0m10.789s 00:14:49.817 user 0m8.138s 
00:14:49.817 sys 0m5.497s 00:14:49.817 09:27:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.817 09:27:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:49.817 ************************************ 00:14:49.817 END TEST nvmf_target_discovery 00:14:49.817 ************************************ 00:14:49.817 09:27:42 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:49.817 09:27:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.817 09:27:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.817 09:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.817 ************************************ 00:14:49.817 START TEST nvmf_referrals 00:14:49.817 ************************************ 00:14:49.817 09:27:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:49.817 * Looking for test storage... 00:14:49.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.817 09:27:43 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:49.817 09:27:43 
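The common.sh prologue being sourced here also derives the host identity that every nvme-cli call in this test will present; in isolation that step is roughly the sketch below (the parameter-expansion derivation of the hostid is an assumption, but it matches the values logged in this run).

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # here: nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: the hostid is the uuid portion of the hostnqn
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # later reused as: nvme discover -t tcp -a 10.0.0.2 -s 8009 "${NVME_HOST[@]}" ...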
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.817 09:27:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:56.411 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:56.411 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:56.411 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:56.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
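nvmf_tcp_init, which begins here, splits the two detected E810 ports between a fresh network namespace (the target side) and the root namespace (the initiator side). Condensed, the topology it builds over this and the next few entries is:

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                        # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1                 # target -> initiator

The two pings confirm reachability in both directions before the target application is started.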
00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.411 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:14:56.412 00:14:56.412 --- 10.0.0.2 ping statistics --- 00:14:56.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.412 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:14:56.412 00:14:56.412 --- 10.0.0.1 ping statistics --- 00:14:56.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.412 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=178818 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 178818 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 178818 ']' 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
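The nvmfappstart step recorded here launches the target inside that namespace and blocks until its JSON-RPC socket answers. A minimal stand-in (the polling loop replaces the harness's waitforlisten helper; the binary path is the workspace build used by this job):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                           # 178818 in this run
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                        # wait for /var/tmp/spdk.sock to come up
  done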
00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:56.412 09:27:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.412 [2024-05-16 09:27:49.942669] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:14:56.412 [2024-05-16 09:27:49.942735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.673 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.673 [2024-05-16 09:27:50.014265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.673 [2024-05-16 09:27:50.092801] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.673 [2024-05-16 09:27:50.092840] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.673 [2024-05-16 09:27:50.092847] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.673 [2024-05-16 09:27:50.092854] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.673 [2024-05-16 09:27:50.092864] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.673 [2024-05-16 09:27:50.092925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.673 [2024-05-16 09:27:50.093042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.673 [2024-05-16 09:27:50.093202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.673 [2024-05-16 09:27:50.093202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.246 [2024-05-16 09:27:50.773640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.246 [2024-05-16 09:27:50.789644] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:57.246 [2024-05-16 09:27:50.789854] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.246 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.508 
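With the target up, referrals.sh creates the TCP transport, exposes the discovery service on port 8009 and registers three referrals; the RPC-side verification that follows is equivalent to the sketch below (transport flags copied verbatim from the run above).

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  [ "$($RPC nvmf_discovery_get_referrals | jq length)" -eq 3 ]
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # expected output: 127.0.0.2, 127.0.0.3, 127.0.0.4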
09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.508 09:27:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:57.770 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.032 09:27:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.032 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.294 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:58.294 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.294 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:58.294 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:58.294 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.294 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
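The nvme-cli counterpart of those checks pulls the discovery log as JSON and filters it by record subtype with jq, roughly as below (hostnqn/hostid arguments omitted; the harness passes the generated ones).

  disc() { nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json "$@"; }
  disc | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # a referral registered with -n <subnqn> is reported under a different subtype:
  disc | jq '.records[] | select(.subtype == "nvme subsystem")'                   # points at nqn.2016-06.io.spdk:cnode1
  disc | jq '.records[] | select(.subtype == "discovery subsystem referral")'     # points at another discovery service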
00:14:58.555 09:27:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.555 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:58.817 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 
--hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.078 rmmod nvme_tcp 00:14:59.078 rmmod nvme_fabrics 00:14:59.078 rmmod nvme_keyring 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 178818 ']' 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 178818 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 178818 ']' 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 178818 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 178818 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 178818' 00:14:59.078 killing process with pid 178818 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 178818 00:14:59.078 [2024-05-16 09:27:52.629123] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:59.078 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 178818 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
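nvmftestfini above unwinds the fixture in a fixed order: unload the initiator-side kernel modules, kill the nvmf_tgt process by pid, then drop the SPDK network namespace and (just below) flush the address left on the initiator interface. Approximately, using the pid, namespace and interface names from this run; remove_spdk_ns is paraphrased here as a plain ip netns delete:

  sync
  modprobe -v -r nvme-tcp        # -v shows the rmmod of nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 178818                    # the nvmf_tgt started for the referrals test
  ip netns delete cvl_0_0_ns_spdk   # rough equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1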
00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.340 09:27:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.885 09:27:54 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:01.885 00:15:01.885 real 0m11.856s 00:15:01.885 user 0m13.549s 00:15:01.885 sys 0m5.649s 00:15:01.885 09:27:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:01.885 09:27:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:01.885 ************************************ 00:15:01.885 END TEST nvmf_referrals 00:15:01.885 ************************************ 00:15:01.885 09:27:54 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:01.885 09:27:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:01.885 09:27:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:01.885 09:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:01.885 ************************************ 00:15:01.885 START TEST nvmf_connect_disconnect 00:15:01.885 ************************************ 00:15:01.885 09:27:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:01.885 * Looking for test storage... 00:15:01.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.885 09:27:55 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.885 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
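Before any fabric traffic, the harness mints a per-run host identity: the NQN comes from nvme gen-hostnqn and the host ID reuses the UUID embedded in it, and the pair is passed to every nvme-cli invocation. A small sketch of that setup; the parameter expansion used to pull out the UUID is an assumption, the exact derivation in nvmf/common.sh may differ:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80c5a598-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID portion (assumed derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json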
00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:15:01.886 09:27:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:15:08.483 
09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:08.483 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:08.483 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:08.483 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:08.483 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.483 09:28:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.483 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:08.483 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.746 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.746 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.746 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:08.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:15:08.746 00:15:08.746 --- 10.0.0.2 ping statistics --- 00:15:08.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.746 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:15:08.746 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:15:08.746 00:15:08.746 --- 10.0.0.1 ping statistics --- 00:15:08.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.746 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=183637 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 183637 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 183637 ']' 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:08.747 09:28:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:08.747 [2024-05-16 09:28:02.239583] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
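At this point the topology is in place: cvl_0_0 carries 10.0.0.2 inside the cvl_0_0_ns_spdk namespace (target side) while cvl_0_1 keeps 10.0.0.1 in the default namespace (initiator side), and the two pings above confirmed reachability in both directions. nvmfappstart has just launched nvmf_tgt inside that namespace (pid 183637); the log that follows shows the RPC provisioning and five connect/disconnect iterations. A condensed sketch of the whole sequence, assuming the paths and addresses from this run and substituting a plain polling loop for the harness's waitforlisten helper:

  # start the target inside the namespace and remember its pid
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done   # crude readiness check

  # provision: transport, backing bdev, subsystem, namespace, listener
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 64 512          # returns Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # five connect/disconnect rounds, matching num_iterations=5 below
  for i in $(seq 5); do
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints 'NQN:... disconnected 1 controller(s)'
  done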
00:15:08.747 [2024-05-16 09:28:02.239647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.747 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.008 [2024-05-16 09:28:02.310895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.008 [2024-05-16 09:28:02.386991] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.008 [2024-05-16 09:28:02.387029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.008 [2024-05-16 09:28:02.387036] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.008 [2024-05-16 09:28:02.387043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.008 [2024-05-16 09:28:02.387049] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.008 [2024-05-16 09:28:02.387192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.008 [2024-05-16 09:28:02.387308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.008 [2024-05-16 09:28:02.387462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.008 [2024-05-16 09:28:02.387463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.580 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:09.580 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:15:09.580 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 [2024-05-16 09:28:03.054814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:09.581 09:28:03 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 [2024-05-16 09:28:03.114046] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:09.581 [2024-05-16 09:28:03.114269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:09.581 09:28:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:13.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.897 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.897 rmmod nvme_tcp 00:15:27.897 rmmod nvme_fabrics 00:15:27.897 rmmod nvme_keyring 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:28.158 09:28:21 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 183637 ']' 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 183637 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 183637 ']' 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 183637 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 183637 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 183637' 00:15:28.158 killing process with pid 183637 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 183637 00:15:28.158 [2024-05-16 09:28:21.526031] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 183637 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.158 09:28:21 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.708 09:28:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.708 00:15:30.708 real 0m28.821s 00:15:30.708 user 1m18.814s 00:15:30.708 sys 0m6.558s 00:15:30.708 09:28:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:30.708 09:28:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 ************************************ 00:15:30.708 END TEST nvmf_connect_disconnect 00:15:30.708 ************************************ 00:15:30.708 09:28:23 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:30.708 09:28:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:30.708 09:28:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.708 09:28:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.708 ************************************ 00:15:30.708 START TEST nvmf_multitarget 
00:15:30.708 ************************************ 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:30.708 * Looking for test storage... 00:15:30.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
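The multitarget test drives everything through multitarget_rpc.py, whose path was bound to rpc_py just above; the actual create/get/delete calls show up further down in this log. Roughly the flow it exercises, assuming the script path from this run and jq for counting targets:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length           # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length           # 3
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length           # back to 1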
00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.708 09:28:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.303 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:37.304 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:37.304 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:37.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
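gather_supported_nvmf_pci_devs has again matched the two E810 functions (8086:159b) and resolved each one to its kernel net device by globbing the PCI device's net/ directory in sysfs. A stripped-down rendering of that mapping, assuming the two bus addresses seen in this run:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$netdir" ] || continue            # skip functions with no bound net driver
      echo "Found net devices under $pci: ${netdir##*/}"
    done
  done

This is where cvl_0_0 and cvl_0_1 come from; the namespace moves, IP assignment and listeners that follow are all expressed in terms of those two names.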
00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:37.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.304 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:37.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:15:37.567 00:15:37.567 --- 10.0.0.2 ping statistics --- 00:15:37.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.567 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:15:37.567 00:15:37.567 --- 10.0.0.1 ping statistics --- 00:15:37.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.567 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=191644 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 191644 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 191644 ']' 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:37.567 09:28:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:37.567 [2024-05-16 09:28:31.007626] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:15:37.567 [2024-05-16 09:28:31.007694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.567 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.567 [2024-05-16 09:28:31.079767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.829 [2024-05-16 09:28:31.155975] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.829 [2024-05-16 09:28:31.156016] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.829 [2024-05-16 09:28:31.156023] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.829 [2024-05-16 09:28:31.156030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.829 [2024-05-16 09:28:31.156035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.829 [2024-05-16 09:28:31.156113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.829 [2024-05-16 09:28:31.156227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.829 [2024-05-16 09:28:31.156387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.829 [2024-05-16 09:28:31.156388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:38.402 09:28:31 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:38.663 "nvmf_tgt_1" 00:15:38.663 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:38.663 "nvmf_tgt_2" 00:15:38.663 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:38.663 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:38.925 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:38.925 
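
Condensed from the xtrace above (together with the nvmf_delete_target calls that follow just below), the nvmf_multitarget test boils down to the RPC sequence sketched here; the helper path is shortened relative to the workspace path shown in the trace, and the bracket tests stand in for the script's '!=' checks:

  RPC=./test/nvmf/target/multitarget_rpc.py           # helper invoked by multitarget.sh
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists at start
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32         # create two extra targets
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]    # default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1               # remove them again
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target
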
09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:38.925 true 00:15:38.925 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:38.925 true 00:15:38.925 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:38.925 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.187 rmmod nvme_tcp 00:15:39.187 rmmod nvme_fabrics 00:15:39.187 rmmod nvme_keyring 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 191644 ']' 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 191644 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 191644 ']' 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 191644 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 191644 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 191644' 00:15:39.187 killing process with pid 191644 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 191644 00:15:39.187 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 191644 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.448 09:28:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.371 09:28:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.371 00:15:41.371 real 0m11.038s 00:15:41.371 user 0m9.310s 00:15:41.371 sys 0m5.588s 00:15:41.371 09:28:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.371 09:28:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:41.371 ************************************ 00:15:41.371 END TEST nvmf_multitarget 00:15:41.371 ************************************ 00:15:41.371 09:28:34 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:41.371 09:28:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:41.371 09:28:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.371 09:28:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.633 ************************************ 00:15:41.633 START TEST nvmf_rpc 00:15:41.633 ************************************ 00:15:41.633 09:28:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:41.633 * Looking for test storage... 00:15:41.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.633 09:28:35 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.633 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.634 
09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.634 09:28:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.787 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:49.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:49.788 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:49.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.788 
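
The gather_supported_nvmf_pci_devs loop above resolves each supported PCI NIC to its kernel interface name through sysfs. A minimal standalone sketch of that lookup, using the first E810 port reported in this run as the example address:

  #!/usr/bin/env bash
  pci=0000:4b:00.0                                    # first port found in the trace above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs entries for the port's netdevs
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
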
09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:49.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.788 09:28:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:15:49.788 00:15:49.788 --- 10.0.0.2 ping statistics --- 00:15:49.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.788 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:15:49.788 00:15:49.788 --- 10.0.0.1 ping statistics --- 00:15:49.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.788 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=196138 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 196138 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 196138 ']' 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:49.788 09:28:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.788 [2024-05-16 09:28:42.283129] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:15:49.788 [2024-05-16 09:28:42.283193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.788 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.788 [2024-05-16 09:28:42.354158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.788 [2024-05-16 09:28:42.429618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.788 [2024-05-16 09:28:42.429654] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.788 [2024-05-16 09:28:42.429661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.788 [2024-05-16 09:28:42.429668] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.788 [2024-05-16 09:28:42.429673] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.788 [2024-05-16 09:28:42.429809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.788 [2024-05-16 09:28:42.429926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.788 [2024-05-16 09:28:42.430096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.788 [2024-05-16 09:28:42.430096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.788 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.788 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:15:49.788 09:28:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:49.789 "tick_rate": 2400000000, 00:15:49.789 "poll_groups": [ 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_000", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [] 00:15:49.789 }, 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_001", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [] 00:15:49.789 }, 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_002", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [] 
00:15:49.789 }, 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_003", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [] 00:15:49.789 } 00:15:49.789 ] 00:15:49.789 }' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.789 [2024-05-16 09:28:43.233892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:49.789 "tick_rate": 2400000000, 00:15:49.789 "poll_groups": [ 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_000", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [ 00:15:49.789 { 00:15:49.789 "trtype": "TCP" 00:15:49.789 } 00:15:49.789 ] 00:15:49.789 }, 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_001", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [ 00:15:49.789 { 00:15:49.789 "trtype": "TCP" 00:15:49.789 } 00:15:49.789 ] 00:15:49.789 }, 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_002", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [ 00:15:49.789 { 00:15:49.789 "trtype": "TCP" 00:15:49.789 } 00:15:49.789 ] 00:15:49.789 }, 00:15:49.789 { 00:15:49.789 "name": "nvmf_tgt_poll_group_003", 00:15:49.789 "admin_qpairs": 0, 00:15:49.789 "io_qpairs": 0, 00:15:49.789 "current_admin_qpairs": 0, 00:15:49.789 "current_io_qpairs": 0, 00:15:49.789 "pending_bdev_io": 0, 00:15:49.789 "completed_nvme_io": 0, 00:15:49.789 "transports": [ 00:15:49.789 { 00:15:49.789 "trtype": "TCP" 00:15:49.789 } 00:15:49.789 ] 00:15:49.789 } 00:15:49.789 ] 
00:15:49.789 }' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:49.789 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 Malloc1 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 [2024-05-16 09:28:43.425490] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:50.051 [2024-05-16 09:28:43.425711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.051 09:28:43 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:15:50.051 [2024-05-16 09:28:43.452546] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:15:50.051 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:50.051 could not add new controller: failed to write to nvme-fabrics device 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.051 09:28:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.965 09:28:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
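
The rpc.sh steps above show the host-authorization round trip: the first nvme connect is rejected with "does not allow host", and the same connect succeeds once the host NQN is added to the subsystem. A condensed sketch of that flow follows; rpc_cmd is the autotest harness's JSON-RPC wrapper, NVME_HOSTNQN/NVME_HOSTID come from the sourced nvmf/common.sh, and the sketch simply creates the subsystem without -a instead of reproducing the trace's -a / allow_any_host -d toggle:

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # a connect attempt here is expected to fail: the subsystem does not allow this host yet
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
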
00:15:51.965 09:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:15:51.965 09:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.965 09:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:51.965 09:28:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.880 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:53.881 [2024-05-16 09:28:47.199473] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:15:53.881 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:53.881 could not add new controller: failed to write to nvme-fabrics device 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.881 09:28:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.268 09:28:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:55.268 09:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:15:55.268 09:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.268 09:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:55.268 09:28:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.839 [2024-05-16 09:28:50.936342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.839 09:28:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.222 09:28:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.222 09:28:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:15:59.222 09:28:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.222 09:28:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:59.222 09:28:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:01.135 
09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.135 [2024-05-16 09:28:54.627274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.135 09:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.136 09:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:03.048 09:28:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:03.048 09:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:16:03.048 09:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.048 09:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:03.048 09:28:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.964 09:28:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.964 [2024-05-16 09:28:58.335034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.964 09:28:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.880 09:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:06.880 09:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:16:06.880 09:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.880 09:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:06.880 09:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:16:08.797 09:29:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.797 [2024-05-16 09:29:02.076382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.797 09:29:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:10.186 09:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:16:10.186 09:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:16:10.186 09:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.186 09:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:10.186 09:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:16:12.119 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 
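Each pass of the loop traced above runs the same attach/probe/teardown cycle against cnode1. Stripped of the xtrace plumbing, one iteration reduces to roughly the following shell sequence (a minimal sketch, assuming scripts/rpc.py talks to the running nvmf_tgt over its default RPC socket, that the Malloc1 bdev already exists, and with $NVME_HOSTNQN/$NVME_HOSTID standing in for the host NQN and ID generated by nvmf/common.sh):

  # build the subsystem and expose Malloc1 as nsid 5 on the TCP listener
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  # connect from the initiator side and wait for the namespace to surface
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
      sleep 2   # simplified stand-in for waitforserial, which caps its retries at ~15
  done
  # tear everything back down
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1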
[2024-05-16 09:29:05.760190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.380 09:29:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:14.296 09:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:14.297 09:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:16:14.297 09:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.297 09:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:14.297 09:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:16.214 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 [2024-05-16 09:29:09.500835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 [2024-05-16 09:29:09.560972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 [2024-05-16 09:29:09.621134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.215 
09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 [2024-05-16 09:29:09.677331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 [2024-05-16 09:29:09.737537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
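The second loop traced above is a pure configuration-churn pass: five times in a row the subsystem is created, given the TCP listener and the Malloc1 namespace (no explicit nsid this time, so the target assigns one and the script removes nsid 1), then torn straight back down without any host connecting. As a sketch, each of those iterations boils down to:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid left to the target
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # drop the assigned namespace
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The nvmf_get_stats dump that follows then sums admin_qpairs and io_qpairs across the poll groups with jq and awk and asserts both totals are non-zero.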
00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:16.478 "tick_rate": 2400000000, 00:16:16.478 "poll_groups": [ 00:16:16.478 { 00:16:16.478 "name": "nvmf_tgt_poll_group_000", 00:16:16.478 "admin_qpairs": 0, 00:16:16.478 "io_qpairs": 224, 00:16:16.478 "current_admin_qpairs": 0, 00:16:16.478 "current_io_qpairs": 0, 00:16:16.478 "pending_bdev_io": 0, 00:16:16.478 "completed_nvme_io": 380, 00:16:16.478 "transports": [ 00:16:16.478 { 00:16:16.478 "trtype": "TCP" 00:16:16.478 } 00:16:16.478 ] 00:16:16.478 }, 00:16:16.478 { 00:16:16.478 "name": "nvmf_tgt_poll_group_001", 00:16:16.478 "admin_qpairs": 1, 00:16:16.478 "io_qpairs": 223, 00:16:16.478 "current_admin_qpairs": 0, 00:16:16.478 "current_io_qpairs": 0, 00:16:16.478 "pending_bdev_io": 0, 00:16:16.478 "completed_nvme_io": 269, 00:16:16.478 "transports": [ 00:16:16.478 { 00:16:16.478 "trtype": "TCP" 00:16:16.478 } 00:16:16.478 ] 00:16:16.478 }, 00:16:16.478 { 00:16:16.478 "name": "nvmf_tgt_poll_group_002", 00:16:16.478 "admin_qpairs": 6, 00:16:16.478 "io_qpairs": 218, 00:16:16.478 "current_admin_qpairs": 0, 00:16:16.478 "current_io_qpairs": 0, 00:16:16.478 "pending_bdev_io": 0, 00:16:16.478 "completed_nvme_io": 365, 00:16:16.478 "transports": [ 00:16:16.478 { 00:16:16.478 "trtype": "TCP" 00:16:16.478 } 00:16:16.478 ] 00:16:16.478 }, 00:16:16.478 { 00:16:16.478 "name": "nvmf_tgt_poll_group_003", 00:16:16.478 "admin_qpairs": 0, 00:16:16.478 "io_qpairs": 224, 00:16:16.478 "current_admin_qpairs": 0, 00:16:16.478 "current_io_qpairs": 0, 00:16:16.478 "pending_bdev_io": 0, 00:16:16.478 "completed_nvme_io": 225, 00:16:16.478 "transports": [ 00:16:16.478 { 00:16:16.478 "trtype": "TCP" 00:16:16.478 } 00:16:16.478 ] 00:16:16.478 } 00:16:16.478 ] 00:16:16.478 }' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.478 rmmod nvme_tcp 00:16:16.478 rmmod nvme_fabrics 00:16:16.478 rmmod nvme_keyring 00:16:16.478 
09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 196138 ']' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 196138 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 196138 ']' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 196138 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:16.478 09:29:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 196138 00:16:16.478 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:16.478 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:16.478 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 196138' 00:16:16.478 killing process with pid 196138 00:16:16.478 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 196138 00:16:16.478 [2024-05-16 09:29:10.016527] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:16.478 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 196138 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.739 09:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.286 09:29:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.286 00:16:19.286 real 0m37.284s 00:16:19.286 user 1m52.918s 00:16:19.286 sys 0m7.034s 00:16:19.286 09:29:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:19.286 09:29:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:19.286 ************************************ 00:16:19.286 END TEST nvmf_rpc 00:16:19.286 ************************************ 00:16:19.286 09:29:12 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:19.286 09:29:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:19.286 09:29:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:19.286 09:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.286 ************************************ 00:16:19.286 START TEST nvmf_invalid 00:16:19.286 ************************************ 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:19.286 * Looking for test storage... 00:16:19.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.286 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.287 09:29:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:25.880 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:25.880 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:25.880 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:25.880 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.880 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:26.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:26.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:16:26.143 00:16:26.143 --- 10.0.0.2 ping statistics --- 00:16:26.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.143 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:16:26.143 00:16:26.143 --- 10.0.0.1 ping statistics --- 00:16:26.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.143 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=205987 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 205987 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 205987 ']' 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.143 09:29:19 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:26.143 [2024-05-16 09:29:19.568920] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:16:26.143 [2024-05-16 09:29:19.568986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.143 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.143 [2024-05-16 09:29:19.639500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.403 [2024-05-16 09:29:19.714854] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.403 [2024-05-16 09:29:19.714891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.403 [2024-05-16 09:29:19.714899] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.403 [2024-05-16 09:29:19.714905] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.403 [2024-05-16 09:29:19.714911] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.403 [2024-05-16 09:29:19.715048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.403 [2024-05-16 09:29:19.715190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.403 [2024-05-16 09:29:19.715450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.403 [2024-05-16 09:29:19.715451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:26.974 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17402 00:16:26.975 [2024-05-16 09:29:20.534019] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:27.236 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:27.236 { 00:16:27.236 "nqn": "nqn.2016-06.io.spdk:cnode17402", 00:16:27.236 "tgt_name": "foobar", 00:16:27.236 "method": "nvmf_create_subsystem", 00:16:27.236 "req_id": 1 00:16:27.236 } 00:16:27.236 Got JSON-RPC error response 00:16:27.236 response: 00:16:27.236 { 00:16:27.236 "code": -32603, 00:16:27.236 "message": "Unable to find target foobar" 00:16:27.236 }' 00:16:27.236 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:27.236 { 00:16:27.236 "nqn": "nqn.2016-06.io.spdk:cnode17402", 00:16:27.236 "tgt_name": "foobar", 00:16:27.236 "method": "nvmf_create_subsystem", 00:16:27.236 "req_id": 1 00:16:27.236 } 00:16:27.236 Got JSON-RPC error response 00:16:27.236 response: 00:16:27.236 { 00:16:27.236 "code": -32603, 00:16:27.236 "message": "Unable to find target foobar" 00:16:27.236 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:27.236 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:27.236 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27890 00:16:27.236 [2024-05-16 09:29:20.710622] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27890: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:27.236 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:27.236 { 00:16:27.236 "nqn": "nqn.2016-06.io.spdk:cnode27890", 00:16:27.236 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:27.236 "method": "nvmf_create_subsystem", 00:16:27.236 "req_id": 1 00:16:27.236 } 00:16:27.236 Got JSON-RPC error response 00:16:27.236 response: 00:16:27.236 { 00:16:27.236 "code": -32602, 00:16:27.236 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:27.236 }' 00:16:27.236 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:27.236 { 00:16:27.236 "nqn": "nqn.2016-06.io.spdk:cnode27890", 00:16:27.236 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:27.236 "method": "nvmf_create_subsystem", 00:16:27.236 "req_id": 1 00:16:27.236 } 00:16:27.236 Got JSON-RPC error response 00:16:27.236 response: 00:16:27.236 { 00:16:27.236 "code": -32602, 00:16:27.236 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:27.236 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:27.237 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:27.237 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20336 00:16:27.498 [2024-05-16 09:29:20.879175] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20336: invalid model number 'SPDK_Controller' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:27.498 { 00:16:27.498 "nqn": "nqn.2016-06.io.spdk:cnode20336", 00:16:27.498 "model_number": "SPDK_Controller\u001f", 00:16:27.498 "method": "nvmf_create_subsystem", 00:16:27.498 "req_id": 1 00:16:27.498 } 00:16:27.498 Got JSON-RPC error response 00:16:27.498 response: 00:16:27.498 { 00:16:27.498 "code": -32602, 00:16:27.498 "message": "Invalid MN SPDK_Controller\u001f" 00:16:27.498 }' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:27.498 { 00:16:27.498 "nqn": "nqn.2016-06.io.spdk:cnode20336", 00:16:27.498 "model_number": "SPDK_Controller\u001f", 00:16:27.498 "method": "nvmf_create_subsystem", 00:16:27.498 "req_id": 1 00:16:27.498 } 00:16:27.498 Got JSON-RPC error response 00:16:27.498 response: 00:16:27.498 { 00:16:27.498 "code": -32602, 00:16:27.498 "message": "Invalid MN SPDK_Controller\u001f" 00:16:27.498 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:16:27.498 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 88 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:27.499 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '5Tgx7jRpsO\#0aXMGLd:' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '5Tgx7jRpsO\#0aXMGLd:' nqn.2016-06.io.spdk:cnode14770 00:16:27.761 [2024-05-16 09:29:21.216244] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14770: invalid serial number '5Tgx7jRpsO\#0aXMGLd:' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:27.761 { 00:16:27.761 "nqn": "nqn.2016-06.io.spdk:cnode14770", 00:16:27.761 "serial_number": "5T\u007fgx7jRpsO\\#0aXMGLd:", 00:16:27.761 "method": "nvmf_create_subsystem", 00:16:27.761 "req_id": 1 00:16:27.761 } 00:16:27.761 Got JSON-RPC error response 00:16:27.761 response: 00:16:27.761 { 00:16:27.761 "code": -32602, 
00:16:27.761 "message": "Invalid SN 5T\u007fgx7jRpsO\\#0aXMGLd:" 00:16:27.761 }' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:27.761 { 00:16:27.761 "nqn": "nqn.2016-06.io.spdk:cnode14770", 00:16:27.761 "serial_number": "5T\u007fgx7jRpsO\\#0aXMGLd:", 00:16:27.761 "method": "nvmf_create_subsystem", 00:16:27.761 "req_id": 1 00:16:27.761 } 00:16:27.761 Got JSON-RPC error response 00:16:27.761 response: 00:16:27.761 { 00:16:27.761 "code": -32602, 00:16:27.761 "message": "Invalid SN 5T\u007fgx7jRpsO\\#0aXMGLd:" 00:16:27.761 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:27.761 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:27.762 
09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:27.762 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:28.024 
09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:28.024 
09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:28.024 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs' 00:16:28.025 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs' nqn.2016-06.io.spdk:cnode1655 00:16:28.286 [2024-05-16 09:29:21.697791] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1655: invalid model number 'z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs' 00:16:28.286 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:28.286 { 00:16:28.286 "nqn": "nqn.2016-06.io.spdk:cnode1655", 00:16:28.286 "model_number": "z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs", 00:16:28.286 "method": "nvmf_create_subsystem", 00:16:28.286 "req_id": 1 00:16:28.286 } 00:16:28.286 Got JSON-RPC error response 00:16:28.286 response: 00:16:28.286 { 00:16:28.286 "code": -32602, 00:16:28.286 "message": "Invalid MN z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs" 00:16:28.286 }' 00:16:28.286 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:28.286 { 00:16:28.286 "nqn": "nqn.2016-06.io.spdk:cnode1655", 00:16:28.286 "model_number": "z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs", 00:16:28.286 "method": "nvmf_create_subsystem", 00:16:28.286 "req_id": 1 00:16:28.286 } 00:16:28.286 Got JSON-RPC error response 00:16:28.286 response: 00:16:28.286 { 00:16:28.286 "code": -32602, 00:16:28.286 "message": "Invalid MN z&CR]~PSEpGJ`$bb,|dw^5LB5[kebZGhd1v3p;8zs" 00:16:28.286 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:28.286 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:28.562 [2024-05-16 09:29:21.870398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.562 09:29:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:28.562 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:28.562 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:28.562 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:28.562 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:28.562 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:28.825 [2024-05-16 09:29:22.231518] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:28.825 [2024-05-16 09:29:22.231581] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:28.825 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:28.825 { 00:16:28.825 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:28.825 "listen_address": { 00:16:28.825 "trtype": "tcp", 00:16:28.825 "traddr": "", 00:16:28.825 "trsvcid": "4421" 00:16:28.825 }, 00:16:28.825 "method": "nvmf_subsystem_remove_listener", 00:16:28.825 "req_id": 1 00:16:28.825 } 00:16:28.825 Got JSON-RPC error response 00:16:28.825 response: 00:16:28.825 { 00:16:28.825 "code": -32602, 00:16:28.825 "message": "Invalid parameters" 00:16:28.825 }' 00:16:28.825 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:28.825 { 00:16:28.825 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:28.825 "listen_address": { 00:16:28.825 "trtype": "tcp", 00:16:28.825 "traddr": "", 00:16:28.825 "trsvcid": "4421" 00:16:28.825 }, 00:16:28.825 "method": "nvmf_subsystem_remove_listener", 00:16:28.825 "req_id": 1 00:16:28.825 } 00:16:28.825 Got JSON-RPC error response 00:16:28.825 response: 00:16:28.825 { 00:16:28.825 "code": -32602, 00:16:28.825 "message": "Invalid parameters" 00:16:28.825 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:28.825 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31692 -i 0 00:16:29.087 [2024-05-16 09:29:22.408105] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31692: invalid cntlid range [0-65519] 00:16:29.087 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:29.087 { 00:16:29.087 "nqn": "nqn.2016-06.io.spdk:cnode31692", 00:16:29.087 "min_cntlid": 0, 00:16:29.087 "method": "nvmf_create_subsystem", 00:16:29.087 "req_id": 1 00:16:29.087 } 00:16:29.087 Got JSON-RPC error response 00:16:29.087 response: 00:16:29.087 { 00:16:29.087 "code": -32602, 00:16:29.087 "message": "Invalid cntlid range [0-65519]" 00:16:29.087 }' 00:16:29.087 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:29.087 { 00:16:29.087 "nqn": "nqn.2016-06.io.spdk:cnode31692", 00:16:29.087 "min_cntlid": 0, 00:16:29.087 "method": "nvmf_create_subsystem", 00:16:29.087 "req_id": 1 
00:16:29.087 } 00:16:29.087 Got JSON-RPC error response 00:16:29.087 response: 00:16:29.087 { 00:16:29.087 "code": -32602, 00:16:29.087 "message": "Invalid cntlid range [0-65519]" 00:16:29.087 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:29.087 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10564 -i 65520 00:16:29.087 [2024-05-16 09:29:22.584670] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10564: invalid cntlid range [65520-65519] 00:16:29.087 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:29.087 { 00:16:29.087 "nqn": "nqn.2016-06.io.spdk:cnode10564", 00:16:29.087 "min_cntlid": 65520, 00:16:29.087 "method": "nvmf_create_subsystem", 00:16:29.087 "req_id": 1 00:16:29.087 } 00:16:29.087 Got JSON-RPC error response 00:16:29.087 response: 00:16:29.087 { 00:16:29.087 "code": -32602, 00:16:29.087 "message": "Invalid cntlid range [65520-65519]" 00:16:29.087 }' 00:16:29.087 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:29.087 { 00:16:29.087 "nqn": "nqn.2016-06.io.spdk:cnode10564", 00:16:29.087 "min_cntlid": 65520, 00:16:29.087 "method": "nvmf_create_subsystem", 00:16:29.087 "req_id": 1 00:16:29.087 } 00:16:29.087 Got JSON-RPC error response 00:16:29.087 response: 00:16:29.087 { 00:16:29.087 "code": -32602, 00:16:29.087 "message": "Invalid cntlid range [65520-65519]" 00:16:29.087 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:29.087 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8870 -I 0 00:16:29.348 [2024-05-16 09:29:22.761290] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8870: invalid cntlid range [1-0] 00:16:29.348 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:29.348 { 00:16:29.348 "nqn": "nqn.2016-06.io.spdk:cnode8870", 00:16:29.348 "max_cntlid": 0, 00:16:29.348 "method": "nvmf_create_subsystem", 00:16:29.348 "req_id": 1 00:16:29.348 } 00:16:29.348 Got JSON-RPC error response 00:16:29.348 response: 00:16:29.348 { 00:16:29.348 "code": -32602, 00:16:29.348 "message": "Invalid cntlid range [1-0]" 00:16:29.348 }' 00:16:29.348 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:29.348 { 00:16:29.348 "nqn": "nqn.2016-06.io.spdk:cnode8870", 00:16:29.348 "max_cntlid": 0, 00:16:29.348 "method": "nvmf_create_subsystem", 00:16:29.348 "req_id": 1 00:16:29.348 } 00:16:29.348 Got JSON-RPC error response 00:16:29.348 response: 00:16:29.348 { 00:16:29.348 "code": -32602, 00:16:29.348 "message": "Invalid cntlid range [1-0]" 00:16:29.348 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:29.348 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25008 -I 65520 00:16:29.618 [2024-05-16 09:29:22.937841] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25008: invalid cntlid range [1-65520] 00:16:29.618 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:29.618 { 00:16:29.618 "nqn": "nqn.2016-06.io.spdk:cnode25008", 00:16:29.618 "max_cntlid": 65520, 00:16:29.618 "method": "nvmf_create_subsystem", 00:16:29.618 "req_id": 1 00:16:29.618 } 00:16:29.618 Got 
JSON-RPC error response 00:16:29.618 response: 00:16:29.618 { 00:16:29.618 "code": -32602, 00:16:29.618 "message": "Invalid cntlid range [1-65520]" 00:16:29.618 }' 00:16:29.618 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:29.618 { 00:16:29.618 "nqn": "nqn.2016-06.io.spdk:cnode25008", 00:16:29.618 "max_cntlid": 65520, 00:16:29.618 "method": "nvmf_create_subsystem", 00:16:29.618 "req_id": 1 00:16:29.618 } 00:16:29.618 Got JSON-RPC error response 00:16:29.618 response: 00:16:29.618 { 00:16:29.618 "code": -32602, 00:16:29.618 "message": "Invalid cntlid range [1-65520]" 00:16:29.618 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:29.618 09:29:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13953 -i 6 -I 5 00:16:29.618 [2024-05-16 09:29:23.102361] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13953: invalid cntlid range [6-5] 00:16:29.618 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:29.618 { 00:16:29.618 "nqn": "nqn.2016-06.io.spdk:cnode13953", 00:16:29.618 "min_cntlid": 6, 00:16:29.618 "max_cntlid": 5, 00:16:29.618 "method": "nvmf_create_subsystem", 00:16:29.618 "req_id": 1 00:16:29.618 } 00:16:29.618 Got JSON-RPC error response 00:16:29.618 response: 00:16:29.618 { 00:16:29.618 "code": -32602, 00:16:29.618 "message": "Invalid cntlid range [6-5]" 00:16:29.618 }' 00:16:29.618 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:29.618 { 00:16:29.618 "nqn": "nqn.2016-06.io.spdk:cnode13953", 00:16:29.618 "min_cntlid": 6, 00:16:29.618 "max_cntlid": 5, 00:16:29.618 "method": "nvmf_create_subsystem", 00:16:29.618 "req_id": 1 00:16:29.618 } 00:16:29.618 Got JSON-RPC error response 00:16:29.618 response: 00:16:29.618 { 00:16:29.618 "code": -32602, 00:16:29.618 "message": "Invalid cntlid range [6-5]" 00:16:29.618 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:29.618 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:29.881 { 00:16:29.881 "name": "foobar", 00:16:29.881 "method": "nvmf_delete_target", 00:16:29.881 "req_id": 1 00:16:29.881 } 00:16:29.881 Got JSON-RPC error response 00:16:29.881 response: 00:16:29.881 { 00:16:29.881 "code": -32602, 00:16:29.881 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:29.881 }' 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:29.881 { 00:16:29.881 "name": "foobar", 00:16:29.881 "method": "nvmf_delete_target", 00:16:29.881 "req_id": 1 00:16:29.881 } 00:16:29.881 Got JSON-RPC error response 00:16:29.881 response: 00:16:29.881 { 00:16:29.881 "code": -32602, 00:16:29.881 "message": "The specified target doesn't exist, cannot delete it." 
00:16:29.881 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.881 rmmod nvme_tcp 00:16:29.881 rmmod nvme_fabrics 00:16:29.881 rmmod nvme_keyring 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 205987 ']' 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 205987 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 205987 ']' 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 205987 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 205987 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 205987' 00:16:29.881 killing process with pid 205987 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 205987 00:16:29.881 [2024-05-16 09:29:23.349804] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:29.881 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 205987 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.142 09:29:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.059 09:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
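Most of the trace above is target/invalid.sh building random serial and model numbers one character at a time and asserting that nvmf_create_subsystem rejects them with the expected JSON-RPC error text. Condensed into a readable sketch: the printf %x / echo -e expansion of each code point is taken from the trace, while the index selection via $RANDOM and the shortened rpc.py path are assumptions of this sketch, not details shown in the log.

# Condensed sketch of the invalid-parameter pattern traced above.
RPC=./scripts/rpc.py                      # shortened; the log uses the absolute workspace path

gen_random_s() {                          # same idea as target/invalid.sh:gen_random_s
    local length=$1 ll string=
    local chars=($(seq 32 127))           # printable ASCII plus DEL, as in the traced chars array
    for ((ll = 0; ll < length; ll++)); do
        # pick a code point (selection via $RANDOM is assumed; the trace does not show it),
        # then render it with printf %x / echo -e exactly as the trace does
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}

serial=$(gen_random_s 21)
out=$("$RPC" nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode14770 2>&1) || true
[[ $out == *"Invalid SN"* ]]              # the case passes only if the target rejects the value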
00:16:32.059 00:16:32.059 real 0m13.231s 00:16:32.059 user 0m19.211s 00:16:32.059 sys 0m6.163s 00:16:32.059 09:29:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:32.059 09:29:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:32.059 ************************************ 00:16:32.059 END TEST nvmf_invalid 00:16:32.059 ************************************ 00:16:32.059 09:29:25 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:32.059 09:29:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:32.059 09:29:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:32.059 09:29:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:32.320 ************************************ 00:16:32.320 START TEST nvmf_abort 00:16:32.320 ************************************ 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:32.320 * Looking for test storage... 00:16:32.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.320 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
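At this point abort.sh has defined MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=4096 and called nvmftestinit. Those two values are not consumed in the portion of the log shown here; for orientation only, a typical way such a test exposes a malloc bdev over TCP with the stock rpc.py looks like the sketch below. The subsystem NQN, bdev name, address and port are illustrative, not a transcript of abort.sh; the '-t tcp -o' transport options mirror the NVMF_TRANSPORT_OPTS value set earlier in the log.

# Illustration only: how MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE typically feed the target RPCs.
RPC=./scripts/rpc.py                                        # path shortened
$RPC nvmf_create_transport -t tcp -o                        # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
$RPC bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s "$NVMF_SERIAL"
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # target IP/port as set up by nvmftestinit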
00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:16:32.321 09:29:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.913 09:29:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:38.913 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.913 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:38.914 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:38.914 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:38.914 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.914 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:39.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:16:39.175 00:16:39.175 --- 10.0.0.2 ping statistics --- 00:16:39.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.175 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:16:39.175 00:16:39.175 --- 10.0.0.1 ping statistics --- 00:16:39.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.175 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:39.175 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=210986 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 210986 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 210986 ']' 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:39.436 09:29:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:39.436 [2024-05-16 09:29:32.786937] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
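The block above is the harness's nvmf_tcp_init: both E810 ports are flushed, the target-side port is moved into its own network namespace, addresses are assigned, the firewall is opened for NVMe/TCP port 4420, and connectivity is verified with a ping in each direction. A condensed sketch of those steps, assuming the cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addresses seen in this run (they will differ on other hosts):

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above; not the harness script itself.
set -euo pipefail

TGT_IF=cvl_0_0        # target-side port, moved into a private namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                            # isolate the target port
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic arriving on the initiator-side port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the trace does.
ping -c 1 "$TGT_IP"
ip netns exec "$NS" ping -c 1 "$INI_IP"

After this setup, NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk", so nvmf_tgt listens on 10.0.0.2 inside the namespace while the initiator-side tools connect from the default namespace.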
00:16:39.436 [2024-05-16 09:29:32.786995] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.436 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.436 [2024-05-16 09:29:32.870661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.436 [2024-05-16 09:29:32.938000] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.436 [2024-05-16 09:29:32.938039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.436 [2024-05-16 09:29:32.938047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.436 [2024-05-16 09:29:32.938059] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.436 [2024-05-16 09:29:32.938066] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.436 [2024-05-16 09:29:32.938200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.436 [2024-05-16 09:29:32.938435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.436 [2024-05-16 09:29:32.938435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.006 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:40.006 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:16:40.006 09:29:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.006 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.006 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 [2024-05-16 09:29:33.598822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 Malloc0 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 Delay0 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:40.268 09:29:33 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 [2024-05-16 09:29:33.682814] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:40.268 [2024-05-16 09:29:33.683026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.268 09:29:33 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:40.268 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.268 [2024-05-16 09:29:33.805329] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:42.811 Initializing NVMe Controllers 00:16:42.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:42.811 controller IO queue size 128 less than required 00:16:42.811 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:42.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:42.811 Initialization complete. Launching workers. 
00:16:42.811 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36780 00:16:42.811 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36841, failed to submit 62 00:16:42.811 success 36784, unsuccess 57, failed 0 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.811 rmmod nvme_tcp 00:16:42.811 rmmod nvme_fabrics 00:16:42.811 rmmod nvme_keyring 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 210986 ']' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 210986 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 210986 ']' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 210986 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 210986 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 210986' 00:16:42.811 killing process with pid 210986 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 210986 00:16:42.811 [2024-05-16 09:29:36.138276] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 210986 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.811 09:29:36 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.811 09:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.358 09:29:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.358 00:16:45.358 real 0m12.708s 00:16:45.358 user 0m14.148s 00:16:45.358 sys 0m5.702s 00:16:45.358 09:29:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:45.358 09:29:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:45.358 ************************************ 00:16:45.358 END TEST nvmf_abort 00:16:45.358 ************************************ 00:16:45.358 09:29:38 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:45.358 09:29:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:45.358 09:29:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:45.358 09:29:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.358 ************************************ 00:16:45.358 START TEST nvmf_ns_hotplug_stress 00:16:45.358 ************************************ 00:16:45.358 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:45.358 * Looking for test storage... 00:16:45.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.358 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.358 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.359 09:29:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.359 09:29:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.359 09:29:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.951 09:29:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:51.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.951 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:51.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.952 
09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:51.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:51.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.952 
09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:16:51.952 00:16:51.952 --- 10.0.0.2 ping statistics --- 00:16:51.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.952 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:16:51.952 00:16:51.952 --- 10.0.0.1 ping statistics --- 00:16:51.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.952 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:51.952 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=215848 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 215848 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 215848 ']' 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.213 09:29:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:52.213 [2024-05-16 09:29:45.560683] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:16:52.213 [2024-05-16 09:29:45.560730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.213 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.213 [2024-05-16 09:29:45.642910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.213 [2024-05-16 09:29:45.717639] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:52.213 [2024-05-16 09:29:45.717687] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.213 [2024-05-16 09:29:45.717695] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.213 [2024-05-16 09:29:45.717702] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.213 [2024-05-16 09:29:45.717708] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.213 [2024-05-16 09:29:45.717861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.213 [2024-05-16 09:29:45.718081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.213 [2024-05-16 09:29:45.718085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.783 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:52.783 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:16:52.783 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:52.783 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.783 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:53.044 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.044 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:53.044 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.044 [2024-05-16 09:29:46.516218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.044 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:53.305 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.305 [2024-05-16 09:29:46.845410] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:53.305 [2024-05-16 09:29:46.845668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.565 09:29:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:53.565 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:53.825 Malloc0 00:16:53.826 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:53.826 Delay0 00:16:54.086 09:29:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:54.086 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:54.348 NULL1 00:16:54.348 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:54.348 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=216222 00:16:54.348 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:54.348 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:54.348 09:29:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.609 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.609 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:54.870 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:54.870 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:54.870 true 00:16:55.132 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:55.132 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.132 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:55.392 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:55.392 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:55.392 true 00:16:55.392 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:55.392 09:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.653 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:55.914 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:55.914 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:16:55.914 true 00:16:55.914 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:55.914 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.175 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:56.435 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:56.435 09:29:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:56.696 true 00:16:56.696 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:56.696 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.696 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:56.957 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:56.957 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:57.218 true 00:16:57.218 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:57.218 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.218 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:57.480 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:57.480 09:29:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:57.480 true 00:16:57.740 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:57.740 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.740 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:58.002 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:58.002 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:58.002 true 00:16:58.264 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:58.264 
09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:58.264 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:58.525 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:58.525 09:29:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:58.525 true 00:16:58.785 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:58.785 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:58.786 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:59.047 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:59.047 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:59.047 true 00:16:59.308 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:59.308 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.308 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:59.569 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:59.569 09:29:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:59.569 true 00:16:59.569 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:16:59.569 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.831 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:00.092 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:17:00.092 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:17:00.092 true 00:17:00.092 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:00.092 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:17:00.360 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:00.622 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:17:00.622 09:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:17:00.622 true 00:17:00.622 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:00.622 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.883 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:01.142 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:17:01.142 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:17:01.142 true 00:17:01.142 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:01.142 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.403 09:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:01.664 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:17:01.664 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:17:01.664 true 00:17:01.664 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:01.664 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.925 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:02.186 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:17:02.186 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:17:02.186 true 00:17:02.186 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:02.186 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:02.446 09:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:02.706 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:17:02.706 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:17:02.706 true 00:17:02.706 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:02.706 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:02.966 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:03.226 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:17:03.226 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:17:03.226 true 00:17:03.226 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:03.226 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.487 09:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:03.747 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:17:03.747 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:17:03.747 true 00:17:03.747 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:03.747 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.007 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:04.007 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:17:04.007 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:17:04.266 true 00:17:04.266 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:04.266 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.525 09:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:04.525 09:29:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:17:04.525 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:17:04.786 true 00:17:04.786 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:04.786 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.047 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.047 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:17:05.047 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:17:05.309 true 00:17:05.309 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:05.309 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.569 09:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.569 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:17:05.569 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:17:05.830 true 00:17:05.830 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:05.830 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.091 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:06.091 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:17:06.091 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:17:06.351 true 00:17:06.351 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:06.351 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.613 09:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:06.613 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:17:06.613 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:17:06.874 true 00:17:06.874 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:06.874 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.136 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:07.136 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:17:07.136 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:17:07.397 true 00:17:07.397 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:07.397 09:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.658 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:07.658 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:17:07.658 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:17:07.919 true 00:17:07.919 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:07.919 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.180 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:08.180 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:17:08.180 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:17:08.442 true 00:17:08.442 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:08.442 09:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.702 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:08.702 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:17:08.703 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:17:08.964 true 00:17:08.964 09:30:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:08.964 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.224 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:09.224 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:17:09.224 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:17:09.484 true 00:17:09.484 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:09.484 09:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.745 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:09.745 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:17:09.745 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:17:10.005 true 00:17:10.005 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:10.005 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.005 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:10.266 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:17:10.266 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:17:10.526 true 00:17:10.526 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:10.526 09:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.526 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:10.787 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:17:10.787 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:17:11.048 true 00:17:11.048 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:11.048 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.048 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:11.309 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:17:11.309 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:17:11.569 true 00:17:11.569 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:11.569 09:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.569 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:11.830 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:17:11.830 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:17:12.090 true 00:17:12.090 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:12.090 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.090 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:12.351 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:17:12.351 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:17:12.611 true 00:17:12.612 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:12.612 09:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.612 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:12.872 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:17:12.872 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:17:13.131 true 00:17:13.131 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:13.131 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.131 
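The iterations above (and the ones that follow) are all produced by the same few lines of ns_hotplug_stress.sh echoed in the trace: while the target process (PID 216222 in this run) is still alive per `kill -0` (@44), the test removes namespace 1 from nqn.2016-06.io.spdk:cnode1 (@45), re-attaches the Delay0 bdev as a namespace (@46), bumps `null_size` by one (@49) and resizes the NULL1 bdev to match (@50). Below is a minimal sketch of that loop, not the script itself; the `rpc_py` and `TARGET_PID` variable names, the starting `null_size`, and the exact loop construct are assumptions, and the real script's control flow around the liveness check may differ.

```bash
#!/usr/bin/env bash
# Hedged sketch of the add/remove/resize loop traced above.
# Assumed stand-ins: rpc_py, TARGET_PID, and the initial null_size value.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
TARGET_PID=216222      # nvmf_tgt PID seen in this run
null_size=1000         # placeholder; the real starting value is set earlier

while kill -0 "$TARGET_PID" 2>/dev/null; do    # @44: stop once the target dies
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
    null_size=$((null_size + 1))                                       # @49
    "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50
done
```

The bare `true` entries interleaved in the trace are simply rpc.py printing the JSON result of each successful `bdev_null_resize` call; once the target shuts down, the `kill -0` check fails and the loop ends, which matches the "No such process" message that appears later in the log.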
09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.391 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:17:13.391 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:17:13.651 true 00:17:13.651 09:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:13.651 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.651 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.911 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:17:13.911 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:17:14.172 true 00:17:14.172 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:14.172 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.172 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:14.433 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:17:14.433 09:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:17:14.693 true 00:17:14.693 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:14.694 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.694 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:14.954 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:17:14.954 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:17:15.223 true 00:17:15.223 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:15.223 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.224 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:17:15.485 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:17:15.485 09:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:17:15.485 true 00:17:15.745 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:15.745 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.745 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:16.004 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:17:16.004 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:17:16.004 true 00:17:16.004 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:16.004 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.263 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:16.524 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:17:16.524 09:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:17:16.524 true 00:17:16.524 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:16.524 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.784 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.044 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:17:17.044 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:17:17.044 true 00:17:17.044 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:17.044 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.303 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.562 09:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:17:17.562 09:30:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:17:17.562 true 00:17:17.562 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:17.562 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.005 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.005 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:17:18.005 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:17:18.005 true 00:17:18.005 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:18.265 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.265 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.526 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:17:18.526 09:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:17:18.526 true 00:17:18.786 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:18.786 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.786 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.047 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:17:19.047 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:17:19.047 true 00:17:19.047 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:19.047 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.307 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.567 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:17:19.567 09:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 
00:17:19.567 true 00:17:19.567 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:19.567 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.829 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.090 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:17:20.090 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:17:20.090 true 00:17:20.090 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:20.090 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.352 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.613 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:17:20.613 09:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:17:20.613 true 00:17:20.613 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:20.613 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.874 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:21.135 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:17:21.135 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:17:21.135 true 00:17:21.135 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:21.135 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.397 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:21.658 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:17:21.658 09:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:17:21.658 true 00:17:21.658 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:21.658 09:30:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.918 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.179 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:17:22.179 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:17:22.179 true 00:17:22.179 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:22.179 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.439 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.439 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:17:22.439 09:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:17:22.700 true 00:17:22.700 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:22.700 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.960 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.960 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:17:22.960 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:17:23.221 true 00:17:23.221 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:23.221 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.480 09:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:23.481 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:17:23.481 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:17:23.741 true 00:17:23.741 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:23.741 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:17:24.003 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:24.003 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:17:24.003 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:17:24.264 true 00:17:24.264 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:24.264 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.526 09:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:24.526 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:17:24.526 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:17:24.787 true 00:17:24.787 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:24.787 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.787 Initializing NVMe Controllers 00:17:24.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.787 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:17:24.787 Controller IO queue size 128, less than required. 00:17:24.787 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:24.787 WARNING: Some requested NVMe devices were skipped 00:17:24.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:24.787 Initialization complete. Launching workers. 
00:17:24.787 ======================================================== 00:17:24.787 Latency(us) 00:17:24.787 Device Information : IOPS MiB/s Average min max 00:17:24.787 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31271.55 15.27 4093.06 1423.88 10932.93 00:17:24.787 ======================================================== 00:17:24.787 Total : 31271.55 15.27 4093.06 1423.88 10932.93 00:17:24.787 00:17:25.048 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:25.048 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:17:25.048 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:17:25.308 true 00:17:25.308 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 216222 00:17:25.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (216222) - No such process 00:17:25.308 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 216222 00:17:25.308 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:25.568 09:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:25.568 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:17:25.568 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:17:25.569 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:17:25.569 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:25.569 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:17:25.829 null0 00:17:25.829 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:25.829 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:25.829 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:17:25.829 null1 00:17:26.089 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.089 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.089 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:17:26.089 null2 00:17:26.089 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.089 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.089 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:17:26.350 null3 00:17:26.350 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.350 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.350 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:17:26.350 null4 00:17:26.611 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.611 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.611 09:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:17:26.611 null5 00:17:26.611 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.611 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.611 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:17:26.871 null6 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:17:26.871 null7 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
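After the single-namespace loop finishes, the script moves to the multi-threaded phase seen above: with `nthreads=8` (@58) it creates eight small null bdevs, null0 through null7, each 100 MB with a 4096-byte block size per the `bdev_null_create nullN 100 4096` calls in the trace. A short sketch of that setup step, reusing the `rpc_py` stand-in from the earlier sketch, is below; the loop shape is an assumption, only the RPC calls themselves are taken from the log.

```bash
# Sketch of the null-bdev setup traced above (nthreads=8 per @58 in this run).
nthreads=8
for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB bdev, 4096-byte blocks
done
```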
00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.871 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
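Each worker in this phase runs the `add_remove` helper whose lines (@14-@18) are echoed in the trace above: given a namespace ID and a bdev name, it attaches that bdev to cnode1 under the fixed NSID and immediately detaches it, ten times in a row. The sketch below is reconstructed from those traced fragments and again uses the assumed `rpc_py` shorthand; details of the real helper may differ.

```bash
# add_remove as traced at ns_hotplug_stress.sh@14-@18: attach a bdev under a
# fixed NSID on cnode1, then remove it, for ten iterations.
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}
```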
00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
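The eight helpers are launched as background jobs, pairing NSID i+1 with bdev null<i>, and their PIDs are collected (@62-@64) so the script can block on all of them; the `wait 223645 223646 ...` entry just below is that barrier. A hedged sketch of the driver loop, assuming the `add_remove` and `nthreads` definitions from the sketches above:

```bash
# Driver loop traced at ns_hotplug_stress.sh@59-@66: run the eight add_remove
# workers concurrently (NSID 1..8 paired with null0..null7) and wait for all.
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &   # @63: hot-plug worker in the background
    pids+=($!)                           # @64: remember its PID
done
wait "${pids[@]}"                        # @66: barrier seen as "wait 223645 ..."
```

Because the eight workers hammer nvmf_subsystem_add_ns/remove_ns on the same subsystem concurrently, their RPC calls interleave in the log, which is why the add and remove entries for different NSIDs appear shuffled together from this point on.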
00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 223645 223646 223648 223650 223653 223654 223656 223658 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:26.872 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:27.132 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.392 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:27.653 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:17:27.653 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:27.653 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.653 09:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.653 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:27.914 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.914 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.914 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:27.915 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:28.176 09:30:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:28.176 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.437 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:28.438 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:28.438 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.438 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.438 09:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:28.698 09:30:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.698 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:28.959 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:29.221 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.482 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:29.483 09:30:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.483 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.483 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.483 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:29.483 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.483 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:29.483 09:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:29.483 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.483 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.483 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.483 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:29.744 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.005 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:30.266 09:30:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.266 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.528 09:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.528 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.528 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.528 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.528 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.788 rmmod nvme_tcp 00:17:30.788 rmmod nvme_fabrics 00:17:30.788 rmmod nvme_keyring 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 215848 ']' 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 215848 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 215848 ']' 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 215848 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 215848 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 215848' 00:17:30.788 killing process with pid 215848 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 215848 00:17:30.788 [2024-05-16 09:30:24.261382] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:30.788 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 215848 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.048 09:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.960 09:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.960 00:17:32.960 real 0m48.010s 00:17:32.960 user 3m18.057s 00:17:32.960 sys 0m16.456s 00:17:32.960 09:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:32.961 09:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.961 ************************************ 00:17:32.961 END TEST nvmf_ns_hotplug_stress 00:17:32.961 ************************************ 00:17:32.961 09:30:26 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:32.961 09:30:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:32.961 09:30:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:32.961 09:30:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.223 ************************************ 00:17:33.223 START TEST nvmf_connect_stress 00:17:33.223 ************************************ 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:33.223 * Looking for test storage... 
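(The run_test nvmf_connect_stress call above is the per-test wrapper traced out of common/autotest_common.sh; it is what produced the START/END TEST banners and the real/user/sys timing for the hot-plug test that just finished. As far as this log shows, its behaviour is roughly the sketch below — an inference from the visible banners, arg-count check, and timing, not the wrapper's actual source; the test_name variable and the simplified banner format are placeholders.)

    # Rough shape of run_test as inferred from this log: check arguments, print
    # a START banner, time the test script, print an END banner. Details such
    # as xtrace handling and failure accounting are omitted.
    run_test() {
        [ $# -le 1 ] && return 1        # cf. the '[' 3 -le 1 ']' check traced above
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"                       # here: connect_stress.sh --transport=tcp
        echo "************ END TEST $test_name ************"
    }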
00:17:33.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.223 09:30:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:39.811 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:39.811 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.811 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:39.812 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.812 09:30:33 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:39.812 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:17:39.812 00:17:39.812 --- 10.0.0.2 ping statistics --- 00:17:39.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.812 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:17:39.812 00:17:39.812 --- 10.0.0.1 ping statistics --- 00:17:39.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.812 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.812 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=228484 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 228484 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 228484 ']' 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.073 09:30:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:40.073 [2024-05-16 09:30:33.452652] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
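The block above is the harness wiring the test network: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), the port left in the root namespace (cvl_0_1) becomes the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, and both directions are checked with ping before nvmf_tgt is started inside the namespace. A minimal sketch of the same wiring, using the interface names and addresses from this run (they will differ on other hosts):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
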
00:17:40.073 [2024-05-16 09:30:33.452713] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.073 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.073 [2024-05-16 09:30:33.540567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.344 [2024-05-16 09:30:33.634422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.344 [2024-05-16 09:30:33.634479] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.344 [2024-05-16 09:30:33.634487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.344 [2024-05-16 09:30:33.634494] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.345 [2024-05-16 09:30:33.634500] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.345 [2024-05-16 09:30:33.634628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.345 [2024-05-16 09:30:33.634792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.345 [2024-05-16 09:30:33.634793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.916 [2024-05-16 09:30:34.285003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.916 [2024-05-16 09:30:34.309199] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:40.916 [2024-05-16 09:30:34.322183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.916 NULL1 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=228830 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.916 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.917 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.488 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.488 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:41.488 09:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.488 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.488 09:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.752 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.752 09:30:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:41.753 09:30:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.753 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.753 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.013 09:30:35 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.013 09:30:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:42.013 09:30:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.013 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.014 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.275 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.275 09:30:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:42.275 09:30:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.275 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.275 09:30:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.535 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.535 09:30:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:42.535 09:30:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.535 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.535 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.107 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.107 09:30:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:43.107 09:30:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.107 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.107 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.368 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.368 09:30:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:43.368 09:30:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.368 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.368 09:30:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.629 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.629 09:30:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:43.629 09:30:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.629 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.629 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.889 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.889 09:30:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:43.889 09:30:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.890 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.890 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.150 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:17:44.150 09:30:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:44.150 09:30:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.150 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.150 09:30:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.720 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.720 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:44.720 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.720 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.720 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.981 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.981 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:44.981 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.981 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.981 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.242 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.242 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:45.242 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.242 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.242 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.502 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.503 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:45.503 09:30:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.503 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.503 09:30:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.763 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.763 09:30:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:45.763 09:30:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.763 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.763 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.335 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.335 09:30:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:46.335 09:30:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.335 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.335 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.595 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.595 09:30:39 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 228830 00:17:46.595 09:30:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.595 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.595 09:30:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.856 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.856 09:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:46.856 09:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.856 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.856 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.116 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.116 09:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:47.116 09:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.116 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.116 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.687 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.687 09:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:47.687 09:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.687 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.687 09:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.948 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.948 09:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:47.948 09:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.948 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.948 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.208 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.208 09:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:48.208 09:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.208 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.208 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.469 09:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:48.469 09:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.469 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.469 09:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.729 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.729 09:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:48.729 
09:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.729 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.729 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.301 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.301 09:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:49.301 09:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.301 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.301 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.560 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.560 09:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:49.560 09:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.560 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.560 09:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.820 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.820 09:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:49.820 09:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.820 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.820 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.080 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.080 09:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:50.080 09:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.080 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.080 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.341 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.341 09:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:50.341 09:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.341 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.341 09:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.913 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.913 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 228830 00:17:50.913 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.913 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.913 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.913 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 228830 00:17:51.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (228830) - No such process 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 228830 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.174 rmmod nvme_tcp 00:17:51.174 rmmod nvme_fabrics 00:17:51.174 rmmod nvme_keyring 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 228484 ']' 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 228484 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 228484 ']' 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 228484 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 228484 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 228484' 00:17:51.174 killing process with pid 228484 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 228484 00:17:51.174 [2024-05-16 09:30:44.654834] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:51.174 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 228484 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.435 
09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.435 09:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.350 09:30:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.350 00:17:53.350 real 0m20.310s 00:17:53.350 user 0m43.504s 00:17:53.350 sys 0m6.997s 00:17:53.350 09:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:53.350 09:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.350 ************************************ 00:17:53.350 END TEST nvmf_connect_stress 00:17:53.350 ************************************ 00:17:53.350 09:30:46 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.350 09:30:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:53.350 09:30:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:53.350 09:30:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:53.611 ************************************ 00:17:53.611 START TEST nvmf_fused_ordering 00:17:53.611 ************************************ 00:17:53.611 09:30:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.611 * Looking for test storage... 00:17:53.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.611 09:30:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.611 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
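Recapping the connect_stress run that ends above: once nvmf_tgt (pid 228484, core mask 0xE) was up inside the namespace, the target was provisioned over the RPC socket, the connect_stress tool (pid 228830) exercised the connect path at 10.0.0.2:4420 for 10 seconds while the harness repeatedly checked it with kill -0 and drove RPCs from the generated rpc.txt (not shown in this excerpt), and cleanup then removed rpc.txt, unloaded nvme-tcp/nvme-fabrics/nvme-keyring, and killed the target. A rough equivalent of the provisioning and stress steps via scripts/rpc.py, with flags copied from the trace (the test itself goes through its rpc_cmd wrapper; paths are relative to the SPDK repo root):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512       # null bdev NULL1, 1000 MB, 512-byte blocks
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
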
00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.612 09:30:47 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.612 09:30:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.199 
09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:00.199 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:00.199 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
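The xtrace above, repeated here for the fused_ordering test, is gather_supported_nvmf_pci_devs selecting NICs by PCI vendor/device ID (0x8086:0x1592 and 0x8086:0x159b for E810, 0x8086:0x37d2 for X722, plus the listed Mellanox IDs) and then reading the netdev names out of sysfs, which on this host resolves to cvl_0_0 and cvl_0_1. A rough standalone equivalent of that lookup (a sketch, not the script's pci_bus_cache implementation):

    # list net devices backed by Intel E810 functions (device ID 0x159b)
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "$pci -> $(ls "$pci/net")"
    done
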
00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:00.199 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:00.199 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.199 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering 
-- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.200 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms 00:18:00.459 00:18:00.459 --- 10.0.0.2 ping statistics --- 00:18:00.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.459 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms 00:18:00.459 09:30:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:18:00.459 00:18:00.459 --- 10.0.0.1 ping statistics --- 00:18:00.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.459 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.459 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=234860 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 234860 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 234860 ']' 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.719 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.719 [2024-05-16 09:30:54.108477] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:00.719 [2024-05-16 09:30:54.108539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.719 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.719 [2024-05-16 09:30:54.194612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.979 [2024-05-16 09:30:54.288301] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:00.979 [2024-05-16 09:30:54.288360] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.979 [2024-05-16 09:30:54.288368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.979 [2024-05-16 09:30:54.288375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.979 [2024-05-16 09:30:54.288381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.979 [2024-05-16 09:30:54.288420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 [2024-05-16 09:30:54.933003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 [2024-05-16 09:30:54.948974] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:01.552 [2024-05-16 09:30:54.949267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 NULL1 00:18:01.552 09:30:54 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.552 09:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:01.552 [2024-05-16 09:30:55.006798] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:01.552 [2024-05-16 09:30:55.006863] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235204 ] 00:18:01.552 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.493 Attached to nqn.2016-06.io.spdk:cnode1 00:18:02.493 Namespace ID: 1 size: 1GB 00:18:02.493 fused_ordering(0) 00:18:02.493 fused_ordering(1) 00:18:02.493 fused_ordering(2) 00:18:02.493 fused_ordering(3) 00:18:02.493 fused_ordering(4) 00:18:02.493 fused_ordering(5) 00:18:02.493 fused_ordering(6) 00:18:02.493 fused_ordering(7) 00:18:02.493 fused_ordering(8) 00:18:02.493 fused_ordering(9) 00:18:02.493 fused_ordering(10) 00:18:02.493 fused_ordering(11) 00:18:02.493 fused_ordering(12) 00:18:02.493 fused_ordering(13) 00:18:02.493 fused_ordering(14) 00:18:02.493 fused_ordering(15) 00:18:02.493 fused_ordering(16) 00:18:02.493 fused_ordering(17) 00:18:02.493 fused_ordering(18) 00:18:02.493 fused_ordering(19) 00:18:02.493 fused_ordering(20) 00:18:02.493 fused_ordering(21) 00:18:02.493 fused_ordering(22) 00:18:02.493 fused_ordering(23) 00:18:02.493 fused_ordering(24) 00:18:02.493 fused_ordering(25) 00:18:02.493 fused_ordering(26) 00:18:02.493 fused_ordering(27) 00:18:02.493 fused_ordering(28) 00:18:02.493 fused_ordering(29) 00:18:02.493 fused_ordering(30) 00:18:02.493 fused_ordering(31) 00:18:02.493 fused_ordering(32) 00:18:02.493 fused_ordering(33) 00:18:02.493 fused_ordering(34) 00:18:02.493 fused_ordering(35) 00:18:02.493 fused_ordering(36) 00:18:02.493 fused_ordering(37) 00:18:02.493 fused_ordering(38) 00:18:02.493 fused_ordering(39) 00:18:02.493 fused_ordering(40) 00:18:02.493 fused_ordering(41) 00:18:02.493 fused_ordering(42) 00:18:02.493 fused_ordering(43) 00:18:02.493 fused_ordering(44) 00:18:02.493 fused_ordering(45) 00:18:02.493 fused_ordering(46) 00:18:02.493 fused_ordering(47) 00:18:02.493 fused_ordering(48) 00:18:02.493 fused_ordering(49) 00:18:02.493 fused_ordering(50) 00:18:02.493 fused_ordering(51) 00:18:02.493 fused_ordering(52) 00:18:02.493 fused_ordering(53) 00:18:02.493 fused_ordering(54) 00:18:02.493 fused_ordering(55) 
00:18:02.493 fused_ordering(56) 00:18:02.493 fused_ordering(57) 00:18:02.493 fused_ordering(58) 00:18:02.493 fused_ordering(59) 00:18:02.493 fused_ordering(60) 00:18:02.493 fused_ordering(61) 00:18:02.493 fused_ordering(62) 00:18:02.493 fused_ordering(63) 00:18:02.493 fused_ordering(64) 00:18:02.493 fused_ordering(65) 00:18:02.493 fused_ordering(66) 00:18:02.493 fused_ordering(67) 00:18:02.493 fused_ordering(68) 00:18:02.493 fused_ordering(69) 00:18:02.493 fused_ordering(70) 00:18:02.493 fused_ordering(71) 00:18:02.493 fused_ordering(72) 00:18:02.493 fused_ordering(73) 00:18:02.493 fused_ordering(74) 00:18:02.493 fused_ordering(75) 00:18:02.493 fused_ordering(76) 00:18:02.493 fused_ordering(77) 00:18:02.493 fused_ordering(78) 00:18:02.493 fused_ordering(79) 00:18:02.493 fused_ordering(80) 00:18:02.493 fused_ordering(81) 00:18:02.493 fused_ordering(82) 00:18:02.493 fused_ordering(83) 00:18:02.493 fused_ordering(84) 00:18:02.493 fused_ordering(85) 00:18:02.493 fused_ordering(86) 00:18:02.493 fused_ordering(87) 00:18:02.493 fused_ordering(88) 00:18:02.493 fused_ordering(89) 00:18:02.493 fused_ordering(90) 00:18:02.493 fused_ordering(91) 00:18:02.493 fused_ordering(92) 00:18:02.493 fused_ordering(93) 00:18:02.493 fused_ordering(94) 00:18:02.493 fused_ordering(95) 00:18:02.493 fused_ordering(96) 00:18:02.493 fused_ordering(97) 00:18:02.493 fused_ordering(98) 00:18:02.493 fused_ordering(99) 00:18:02.493 fused_ordering(100) 00:18:02.493 fused_ordering(101) 00:18:02.493 fused_ordering(102) 00:18:02.493 fused_ordering(103) 00:18:02.493 fused_ordering(104) 00:18:02.493 fused_ordering(105) 00:18:02.493 fused_ordering(106) 00:18:02.493 fused_ordering(107) 00:18:02.493 fused_ordering(108) 00:18:02.493 fused_ordering(109) 00:18:02.493 fused_ordering(110) 00:18:02.493 fused_ordering(111) 00:18:02.493 fused_ordering(112) 00:18:02.493 fused_ordering(113) 00:18:02.493 fused_ordering(114) 00:18:02.493 fused_ordering(115) 00:18:02.493 fused_ordering(116) 00:18:02.493 fused_ordering(117) 00:18:02.493 fused_ordering(118) 00:18:02.493 fused_ordering(119) 00:18:02.493 fused_ordering(120) 00:18:02.493 fused_ordering(121) 00:18:02.493 fused_ordering(122) 00:18:02.493 fused_ordering(123) 00:18:02.493 fused_ordering(124) 00:18:02.493 fused_ordering(125) 00:18:02.493 fused_ordering(126) 00:18:02.493 fused_ordering(127) 00:18:02.493 fused_ordering(128) 00:18:02.493 fused_ordering(129) 00:18:02.493 fused_ordering(130) 00:18:02.493 fused_ordering(131) 00:18:02.493 fused_ordering(132) 00:18:02.493 fused_ordering(133) 00:18:02.493 fused_ordering(134) 00:18:02.493 fused_ordering(135) 00:18:02.493 fused_ordering(136) 00:18:02.493 fused_ordering(137) 00:18:02.493 fused_ordering(138) 00:18:02.493 fused_ordering(139) 00:18:02.493 fused_ordering(140) 00:18:02.493 fused_ordering(141) 00:18:02.493 fused_ordering(142) 00:18:02.493 fused_ordering(143) 00:18:02.493 fused_ordering(144) 00:18:02.493 fused_ordering(145) 00:18:02.493 fused_ordering(146) 00:18:02.493 fused_ordering(147) 00:18:02.493 fused_ordering(148) 00:18:02.493 fused_ordering(149) 00:18:02.493 fused_ordering(150) 00:18:02.493 fused_ordering(151) 00:18:02.494 fused_ordering(152) 00:18:02.494 fused_ordering(153) 00:18:02.494 fused_ordering(154) 00:18:02.494 fused_ordering(155) 00:18:02.494 fused_ordering(156) 00:18:02.494 fused_ordering(157) 00:18:02.494 fused_ordering(158) 00:18:02.494 fused_ordering(159) 00:18:02.494 fused_ordering(160) 00:18:02.494 fused_ordering(161) 00:18:02.494 fused_ordering(162) 00:18:02.494 fused_ordering(163) 00:18:02.494 fused_ordering(164) 
00:18:02.494 fused_ordering(165) 00:18:02.494 fused_ordering(166) 00:18:02.494 fused_ordering(167) 00:18:02.494 fused_ordering(168) 00:18:02.494 fused_ordering(169) 00:18:02.494 fused_ordering(170) 00:18:02.494 fused_ordering(171) 00:18:02.494 fused_ordering(172) 00:18:02.494 fused_ordering(173) 00:18:02.494 fused_ordering(174) 00:18:02.494 fused_ordering(175) 00:18:02.494 fused_ordering(176) 00:18:02.494 fused_ordering(177) 00:18:02.494 fused_ordering(178) 00:18:02.494 fused_ordering(179) 00:18:02.494 fused_ordering(180) 00:18:02.494 fused_ordering(181) 00:18:02.494 fused_ordering(182) 00:18:02.494 fused_ordering(183) 00:18:02.494 fused_ordering(184) 00:18:02.494 fused_ordering(185) 00:18:02.494 fused_ordering(186) 00:18:02.494 fused_ordering(187) 00:18:02.494 fused_ordering(188) 00:18:02.494 fused_ordering(189) 00:18:02.494 fused_ordering(190) 00:18:02.494 fused_ordering(191) 00:18:02.494 fused_ordering(192) 00:18:02.494 fused_ordering(193) 00:18:02.494 fused_ordering(194) 00:18:02.494 fused_ordering(195) 00:18:02.494 fused_ordering(196) 00:18:02.494 fused_ordering(197) 00:18:02.494 fused_ordering(198) 00:18:02.494 fused_ordering(199) 00:18:02.494 fused_ordering(200) 00:18:02.494 fused_ordering(201) 00:18:02.494 fused_ordering(202) 00:18:02.494 fused_ordering(203) 00:18:02.494 fused_ordering(204) 00:18:02.494 fused_ordering(205) 00:18:02.755 fused_ordering(206) 00:18:02.755 fused_ordering(207) 00:18:02.755 fused_ordering(208) 00:18:02.755 fused_ordering(209) 00:18:02.755 fused_ordering(210) 00:18:02.755 fused_ordering(211) 00:18:02.755 fused_ordering(212) 00:18:02.755 fused_ordering(213) 00:18:02.755 fused_ordering(214) 00:18:02.755 fused_ordering(215) 00:18:02.755 fused_ordering(216) 00:18:02.755 fused_ordering(217) 00:18:02.755 fused_ordering(218) 00:18:02.755 fused_ordering(219) 00:18:02.755 fused_ordering(220) 00:18:02.755 fused_ordering(221) 00:18:02.755 fused_ordering(222) 00:18:02.755 fused_ordering(223) 00:18:02.755 fused_ordering(224) 00:18:02.755 fused_ordering(225) 00:18:02.755 fused_ordering(226) 00:18:02.755 fused_ordering(227) 00:18:02.755 fused_ordering(228) 00:18:02.755 fused_ordering(229) 00:18:02.755 fused_ordering(230) 00:18:02.755 fused_ordering(231) 00:18:02.755 fused_ordering(232) 00:18:02.755 fused_ordering(233) 00:18:02.755 fused_ordering(234) 00:18:02.755 fused_ordering(235) 00:18:02.755 fused_ordering(236) 00:18:02.755 fused_ordering(237) 00:18:02.755 fused_ordering(238) 00:18:02.755 fused_ordering(239) 00:18:02.755 fused_ordering(240) 00:18:02.755 fused_ordering(241) 00:18:02.755 fused_ordering(242) 00:18:02.755 fused_ordering(243) 00:18:02.755 fused_ordering(244) 00:18:02.755 fused_ordering(245) 00:18:02.755 fused_ordering(246) 00:18:02.755 fused_ordering(247) 00:18:02.755 fused_ordering(248) 00:18:02.755 fused_ordering(249) 00:18:02.755 fused_ordering(250) 00:18:02.755 fused_ordering(251) 00:18:02.755 fused_ordering(252) 00:18:02.755 fused_ordering(253) 00:18:02.755 fused_ordering(254) 00:18:02.755 fused_ordering(255) 00:18:02.755 fused_ordering(256) 00:18:02.755 fused_ordering(257) 00:18:02.755 fused_ordering(258) 00:18:02.755 fused_ordering(259) 00:18:02.755 fused_ordering(260) 00:18:02.755 fused_ordering(261) 00:18:02.755 fused_ordering(262) 00:18:02.755 fused_ordering(263) 00:18:02.755 fused_ordering(264) 00:18:02.755 fused_ordering(265) 00:18:02.755 fused_ordering(266) 00:18:02.755 fused_ordering(267) 00:18:02.755 fused_ordering(268) 00:18:02.755 fused_ordering(269) 00:18:02.755 fused_ordering(270) 00:18:02.755 fused_ordering(271) 00:18:02.755 
fused_ordering(272) 00:18:02.755 fused_ordering(273) 00:18:02.755 fused_ordering(274) 00:18:02.755 fused_ordering(275) 00:18:02.755 fused_ordering(276) 00:18:02.755 fused_ordering(277) 00:18:02.755 fused_ordering(278) 00:18:02.755 fused_ordering(279) 00:18:02.755 fused_ordering(280) 00:18:02.755 fused_ordering(281) 00:18:02.755 fused_ordering(282) 00:18:02.755 fused_ordering(283) 00:18:02.755 fused_ordering(284) 00:18:02.755 fused_ordering(285) 00:18:02.755 fused_ordering(286) 00:18:02.755 fused_ordering(287) 00:18:02.755 fused_ordering(288) 00:18:02.755 fused_ordering(289) 00:18:02.755 fused_ordering(290) 00:18:02.755 fused_ordering(291) 00:18:02.755 fused_ordering(292) 00:18:02.755 fused_ordering(293) 00:18:02.755 fused_ordering(294) 00:18:02.755 fused_ordering(295) 00:18:02.755 fused_ordering(296) 00:18:02.755 fused_ordering(297) 00:18:02.755 fused_ordering(298) 00:18:02.755 fused_ordering(299) 00:18:02.755 fused_ordering(300) 00:18:02.755 fused_ordering(301) 00:18:02.755 fused_ordering(302) 00:18:02.755 fused_ordering(303) 00:18:02.755 fused_ordering(304) 00:18:02.755 fused_ordering(305) 00:18:02.755 fused_ordering(306) 00:18:02.755 fused_ordering(307) 00:18:02.755 fused_ordering(308) 00:18:02.755 fused_ordering(309) 00:18:02.755 fused_ordering(310) 00:18:02.755 fused_ordering(311) 00:18:02.755 fused_ordering(312) 00:18:02.756 fused_ordering(313) 00:18:02.756 fused_ordering(314) 00:18:02.756 fused_ordering(315) 00:18:02.756 fused_ordering(316) 00:18:02.756 fused_ordering(317) 00:18:02.756 fused_ordering(318) 00:18:02.756 fused_ordering(319) 00:18:02.756 fused_ordering(320) 00:18:02.756 fused_ordering(321) 00:18:02.756 fused_ordering(322) 00:18:02.756 fused_ordering(323) 00:18:02.756 fused_ordering(324) 00:18:02.756 fused_ordering(325) 00:18:02.756 fused_ordering(326) 00:18:02.756 fused_ordering(327) 00:18:02.756 fused_ordering(328) 00:18:02.756 fused_ordering(329) 00:18:02.756 fused_ordering(330) 00:18:02.756 fused_ordering(331) 00:18:02.756 fused_ordering(332) 00:18:02.756 fused_ordering(333) 00:18:02.756 fused_ordering(334) 00:18:02.756 fused_ordering(335) 00:18:02.756 fused_ordering(336) 00:18:02.756 fused_ordering(337) 00:18:02.756 fused_ordering(338) 00:18:02.756 fused_ordering(339) 00:18:02.756 fused_ordering(340) 00:18:02.756 fused_ordering(341) 00:18:02.756 fused_ordering(342) 00:18:02.756 fused_ordering(343) 00:18:02.756 fused_ordering(344) 00:18:02.756 fused_ordering(345) 00:18:02.756 fused_ordering(346) 00:18:02.756 fused_ordering(347) 00:18:02.756 fused_ordering(348) 00:18:02.756 fused_ordering(349) 00:18:02.756 fused_ordering(350) 00:18:02.756 fused_ordering(351) 00:18:02.756 fused_ordering(352) 00:18:02.756 fused_ordering(353) 00:18:02.756 fused_ordering(354) 00:18:02.756 fused_ordering(355) 00:18:02.756 fused_ordering(356) 00:18:02.756 fused_ordering(357) 00:18:02.756 fused_ordering(358) 00:18:02.756 fused_ordering(359) 00:18:02.756 fused_ordering(360) 00:18:02.756 fused_ordering(361) 00:18:02.756 fused_ordering(362) 00:18:02.756 fused_ordering(363) 00:18:02.756 fused_ordering(364) 00:18:02.756 fused_ordering(365) 00:18:02.756 fused_ordering(366) 00:18:02.756 fused_ordering(367) 00:18:02.756 fused_ordering(368) 00:18:02.756 fused_ordering(369) 00:18:02.756 fused_ordering(370) 00:18:02.756 fused_ordering(371) 00:18:02.756 fused_ordering(372) 00:18:02.756 fused_ordering(373) 00:18:02.756 fused_ordering(374) 00:18:02.756 fused_ordering(375) 00:18:02.756 fused_ordering(376) 00:18:02.756 fused_ordering(377) 00:18:02.756 fused_ordering(378) 00:18:02.756 fused_ordering(379) 
00:18:02.756 fused_ordering(380) 00:18:02.756 fused_ordering(381) 00:18:02.756 fused_ordering(382) 00:18:02.756 fused_ordering(383) 00:18:02.756 fused_ordering(384) 00:18:02.756 fused_ordering(385) 00:18:02.756 fused_ordering(386) 00:18:02.756 fused_ordering(387) 00:18:02.756 fused_ordering(388) 00:18:02.756 fused_ordering(389) 00:18:02.756 fused_ordering(390) 00:18:02.756 fused_ordering(391) 00:18:02.756 fused_ordering(392) 00:18:02.756 fused_ordering(393) 00:18:02.756 fused_ordering(394) 00:18:02.756 fused_ordering(395) 00:18:02.756 fused_ordering(396) 00:18:02.756 fused_ordering(397) 00:18:02.756 fused_ordering(398) 00:18:02.756 fused_ordering(399) 00:18:02.756 fused_ordering(400) 00:18:02.756 fused_ordering(401) 00:18:02.756 fused_ordering(402) 00:18:02.756 fused_ordering(403) 00:18:02.756 fused_ordering(404) 00:18:02.756 fused_ordering(405) 00:18:02.756 fused_ordering(406) 00:18:02.756 fused_ordering(407) 00:18:02.756 fused_ordering(408) 00:18:02.756 fused_ordering(409) 00:18:02.756 fused_ordering(410) 00:18:03.327 fused_ordering(411) 00:18:03.327 fused_ordering(412) 00:18:03.327 fused_ordering(413) 00:18:03.327 fused_ordering(414) 00:18:03.327 fused_ordering(415) 00:18:03.327 fused_ordering(416) 00:18:03.327 fused_ordering(417) 00:18:03.327 fused_ordering(418) 00:18:03.327 fused_ordering(419) 00:18:03.327 fused_ordering(420) 00:18:03.327 fused_ordering(421) 00:18:03.327 fused_ordering(422) 00:18:03.327 fused_ordering(423) 00:18:03.327 fused_ordering(424) 00:18:03.327 fused_ordering(425) 00:18:03.327 fused_ordering(426) 00:18:03.327 fused_ordering(427) 00:18:03.327 fused_ordering(428) 00:18:03.327 fused_ordering(429) 00:18:03.327 fused_ordering(430) 00:18:03.327 fused_ordering(431) 00:18:03.327 fused_ordering(432) 00:18:03.327 fused_ordering(433) 00:18:03.327 fused_ordering(434) 00:18:03.327 fused_ordering(435) 00:18:03.327 fused_ordering(436) 00:18:03.327 fused_ordering(437) 00:18:03.327 fused_ordering(438) 00:18:03.327 fused_ordering(439) 00:18:03.327 fused_ordering(440) 00:18:03.327 fused_ordering(441) 00:18:03.327 fused_ordering(442) 00:18:03.327 fused_ordering(443) 00:18:03.327 fused_ordering(444) 00:18:03.327 fused_ordering(445) 00:18:03.327 fused_ordering(446) 00:18:03.327 fused_ordering(447) 00:18:03.327 fused_ordering(448) 00:18:03.327 fused_ordering(449) 00:18:03.327 fused_ordering(450) 00:18:03.327 fused_ordering(451) 00:18:03.327 fused_ordering(452) 00:18:03.327 fused_ordering(453) 00:18:03.327 fused_ordering(454) 00:18:03.327 fused_ordering(455) 00:18:03.327 fused_ordering(456) 00:18:03.327 fused_ordering(457) 00:18:03.327 fused_ordering(458) 00:18:03.327 fused_ordering(459) 00:18:03.327 fused_ordering(460) 00:18:03.327 fused_ordering(461) 00:18:03.327 fused_ordering(462) 00:18:03.327 fused_ordering(463) 00:18:03.327 fused_ordering(464) 00:18:03.327 fused_ordering(465) 00:18:03.327 fused_ordering(466) 00:18:03.327 fused_ordering(467) 00:18:03.327 fused_ordering(468) 00:18:03.327 fused_ordering(469) 00:18:03.327 fused_ordering(470) 00:18:03.327 fused_ordering(471) 00:18:03.327 fused_ordering(472) 00:18:03.327 fused_ordering(473) 00:18:03.327 fused_ordering(474) 00:18:03.327 fused_ordering(475) 00:18:03.327 fused_ordering(476) 00:18:03.327 fused_ordering(477) 00:18:03.327 fused_ordering(478) 00:18:03.327 fused_ordering(479) 00:18:03.327 fused_ordering(480) 00:18:03.327 fused_ordering(481) 00:18:03.327 fused_ordering(482) 00:18:03.327 fused_ordering(483) 00:18:03.327 fused_ordering(484) 00:18:03.327 fused_ordering(485) 00:18:03.327 fused_ordering(486) 00:18:03.327 
fused_ordering(487) 00:18:03.327 fused_ordering(488) 00:18:03.327 fused_ordering(489) 00:18:03.327 fused_ordering(490) 00:18:03.327 fused_ordering(491) 00:18:03.327 fused_ordering(492) 00:18:03.327 fused_ordering(493) 00:18:03.327 fused_ordering(494) 00:18:03.327 fused_ordering(495) 00:18:03.327 fused_ordering(496) 00:18:03.327 fused_ordering(497) 00:18:03.327 fused_ordering(498) 00:18:03.327 fused_ordering(499) 00:18:03.327 fused_ordering(500) 00:18:03.327 fused_ordering(501) 00:18:03.327 fused_ordering(502) 00:18:03.327 fused_ordering(503) 00:18:03.327 fused_ordering(504) 00:18:03.327 fused_ordering(505) 00:18:03.327 fused_ordering(506) 00:18:03.327 fused_ordering(507) 00:18:03.327 fused_ordering(508) 00:18:03.327 fused_ordering(509) 00:18:03.327 fused_ordering(510) 00:18:03.327 fused_ordering(511) 00:18:03.327 fused_ordering(512) 00:18:03.327 fused_ordering(513) 00:18:03.327 fused_ordering(514) 00:18:03.327 fused_ordering(515) 00:18:03.327 fused_ordering(516) 00:18:03.327 fused_ordering(517) 00:18:03.327 fused_ordering(518) 00:18:03.327 fused_ordering(519) 00:18:03.327 fused_ordering(520) 00:18:03.327 fused_ordering(521) 00:18:03.327 fused_ordering(522) 00:18:03.327 fused_ordering(523) 00:18:03.327 fused_ordering(524) 00:18:03.327 fused_ordering(525) 00:18:03.327 fused_ordering(526) 00:18:03.327 fused_ordering(527) 00:18:03.327 fused_ordering(528) 00:18:03.327 fused_ordering(529) 00:18:03.327 fused_ordering(530) 00:18:03.327 fused_ordering(531) 00:18:03.327 fused_ordering(532) 00:18:03.327 fused_ordering(533) 00:18:03.327 fused_ordering(534) 00:18:03.327 fused_ordering(535) 00:18:03.327 fused_ordering(536) 00:18:03.327 fused_ordering(537) 00:18:03.327 fused_ordering(538) 00:18:03.327 fused_ordering(539) 00:18:03.327 fused_ordering(540) 00:18:03.327 fused_ordering(541) 00:18:03.327 fused_ordering(542) 00:18:03.327 fused_ordering(543) 00:18:03.327 fused_ordering(544) 00:18:03.327 fused_ordering(545) 00:18:03.327 fused_ordering(546) 00:18:03.327 fused_ordering(547) 00:18:03.327 fused_ordering(548) 00:18:03.327 fused_ordering(549) 00:18:03.327 fused_ordering(550) 00:18:03.327 fused_ordering(551) 00:18:03.327 fused_ordering(552) 00:18:03.327 fused_ordering(553) 00:18:03.327 fused_ordering(554) 00:18:03.327 fused_ordering(555) 00:18:03.327 fused_ordering(556) 00:18:03.327 fused_ordering(557) 00:18:03.327 fused_ordering(558) 00:18:03.327 fused_ordering(559) 00:18:03.327 fused_ordering(560) 00:18:03.327 fused_ordering(561) 00:18:03.327 fused_ordering(562) 00:18:03.327 fused_ordering(563) 00:18:03.327 fused_ordering(564) 00:18:03.327 fused_ordering(565) 00:18:03.327 fused_ordering(566) 00:18:03.328 fused_ordering(567) 00:18:03.328 fused_ordering(568) 00:18:03.328 fused_ordering(569) 00:18:03.328 fused_ordering(570) 00:18:03.328 fused_ordering(571) 00:18:03.328 fused_ordering(572) 00:18:03.328 fused_ordering(573) 00:18:03.328 fused_ordering(574) 00:18:03.328 fused_ordering(575) 00:18:03.328 fused_ordering(576) 00:18:03.328 fused_ordering(577) 00:18:03.328 fused_ordering(578) 00:18:03.328 fused_ordering(579) 00:18:03.328 fused_ordering(580) 00:18:03.328 fused_ordering(581) 00:18:03.328 fused_ordering(582) 00:18:03.328 fused_ordering(583) 00:18:03.328 fused_ordering(584) 00:18:03.328 fused_ordering(585) 00:18:03.328 fused_ordering(586) 00:18:03.328 fused_ordering(587) 00:18:03.328 fused_ordering(588) 00:18:03.328 fused_ordering(589) 00:18:03.328 fused_ordering(590) 00:18:03.328 fused_ordering(591) 00:18:03.328 fused_ordering(592) 00:18:03.328 fused_ordering(593) 00:18:03.328 fused_ordering(594) 
00:18:03.328 fused_ordering(595) 00:18:03.328 fused_ordering(596) 00:18:03.328 fused_ordering(597) 00:18:03.328 fused_ordering(598) 00:18:03.328 fused_ordering(599) 00:18:03.328 fused_ordering(600) 00:18:03.328 fused_ordering(601) 00:18:03.328 fused_ordering(602) 00:18:03.328 fused_ordering(603) 00:18:03.328 fused_ordering(604) 00:18:03.328 fused_ordering(605) 00:18:03.328 fused_ordering(606) 00:18:03.328 fused_ordering(607) 00:18:03.328 fused_ordering(608) 00:18:03.328 fused_ordering(609) 00:18:03.328 fused_ordering(610) 00:18:03.328 fused_ordering(611) 00:18:03.328 fused_ordering(612) 00:18:03.328 fused_ordering(613) 00:18:03.328 fused_ordering(614) 00:18:03.328 fused_ordering(615) 00:18:03.898 fused_ordering(616) 00:18:03.898 fused_ordering(617) 00:18:03.898 fused_ordering(618) 00:18:03.898 fused_ordering(619) 00:18:03.898 fused_ordering(620) 00:18:03.898 fused_ordering(621) 00:18:03.898 fused_ordering(622) 00:18:03.898 fused_ordering(623) 00:18:03.898 fused_ordering(624) 00:18:03.898 fused_ordering(625) 00:18:03.898 fused_ordering(626) 00:18:03.898 fused_ordering(627) 00:18:03.898 fused_ordering(628) 00:18:03.898 fused_ordering(629) 00:18:03.898 fused_ordering(630) 00:18:03.898 fused_ordering(631) 00:18:03.898 fused_ordering(632) 00:18:03.898 fused_ordering(633) 00:18:03.898 fused_ordering(634) 00:18:03.898 fused_ordering(635) 00:18:03.898 fused_ordering(636) 00:18:03.898 fused_ordering(637) 00:18:03.898 fused_ordering(638) 00:18:03.898 fused_ordering(639) 00:18:03.898 fused_ordering(640) 00:18:03.898 fused_ordering(641) 00:18:03.898 fused_ordering(642) 00:18:03.898 fused_ordering(643) 00:18:03.898 fused_ordering(644) 00:18:03.898 fused_ordering(645) 00:18:03.898 fused_ordering(646) 00:18:03.898 fused_ordering(647) 00:18:03.898 fused_ordering(648) 00:18:03.898 fused_ordering(649) 00:18:03.898 fused_ordering(650) 00:18:03.898 fused_ordering(651) 00:18:03.898 fused_ordering(652) 00:18:03.898 fused_ordering(653) 00:18:03.898 fused_ordering(654) 00:18:03.898 fused_ordering(655) 00:18:03.898 fused_ordering(656) 00:18:03.898 fused_ordering(657) 00:18:03.898 fused_ordering(658) 00:18:03.898 fused_ordering(659) 00:18:03.898 fused_ordering(660) 00:18:03.898 fused_ordering(661) 00:18:03.898 fused_ordering(662) 00:18:03.898 fused_ordering(663) 00:18:03.898 fused_ordering(664) 00:18:03.898 fused_ordering(665) 00:18:03.898 fused_ordering(666) 00:18:03.898 fused_ordering(667) 00:18:03.898 fused_ordering(668) 00:18:03.898 fused_ordering(669) 00:18:03.898 fused_ordering(670) 00:18:03.898 fused_ordering(671) 00:18:03.898 fused_ordering(672) 00:18:03.898 fused_ordering(673) 00:18:03.898 fused_ordering(674) 00:18:03.898 fused_ordering(675) 00:18:03.898 fused_ordering(676) 00:18:03.898 fused_ordering(677) 00:18:03.898 fused_ordering(678) 00:18:03.898 fused_ordering(679) 00:18:03.898 fused_ordering(680) 00:18:03.898 fused_ordering(681) 00:18:03.898 fused_ordering(682) 00:18:03.898 fused_ordering(683) 00:18:03.898 fused_ordering(684) 00:18:03.898 fused_ordering(685) 00:18:03.898 fused_ordering(686) 00:18:03.898 fused_ordering(687) 00:18:03.898 fused_ordering(688) 00:18:03.898 fused_ordering(689) 00:18:03.898 fused_ordering(690) 00:18:03.898 fused_ordering(691) 00:18:03.898 fused_ordering(692) 00:18:03.898 fused_ordering(693) 00:18:03.898 fused_ordering(694) 00:18:03.898 fused_ordering(695) 00:18:03.898 fused_ordering(696) 00:18:03.898 fused_ordering(697) 00:18:03.898 fused_ordering(698) 00:18:03.898 fused_ordering(699) 00:18:03.898 fused_ordering(700) 00:18:03.898 fused_ordering(701) 00:18:03.898 
fused_ordering(702) 00:18:03.898 fused_ordering(703) 00:18:03.898 fused_ordering(704) 00:18:03.898 fused_ordering(705) 00:18:03.898 fused_ordering(706) 00:18:03.898 fused_ordering(707) 00:18:03.898 fused_ordering(708) 00:18:03.898 fused_ordering(709) 00:18:03.898 fused_ordering(710) 00:18:03.898 fused_ordering(711) 00:18:03.898 fused_ordering(712) 00:18:03.898 fused_ordering(713) 00:18:03.898 fused_ordering(714) 00:18:03.898 fused_ordering(715) 00:18:03.898 fused_ordering(716) 00:18:03.898 fused_ordering(717) 00:18:03.898 fused_ordering(718) 00:18:03.898 fused_ordering(719) 00:18:03.898 fused_ordering(720) 00:18:03.898 fused_ordering(721) 00:18:03.898 fused_ordering(722) 00:18:03.898 fused_ordering(723) 00:18:03.898 fused_ordering(724) 00:18:03.898 fused_ordering(725) 00:18:03.898 fused_ordering(726) 00:18:03.898 fused_ordering(727) 00:18:03.898 fused_ordering(728) 00:18:03.899 fused_ordering(729) 00:18:03.899 fused_ordering(730) 00:18:03.899 fused_ordering(731) 00:18:03.899 fused_ordering(732) 00:18:03.899 fused_ordering(733) 00:18:03.899 fused_ordering(734) 00:18:03.899 fused_ordering(735) 00:18:03.899 fused_ordering(736) 00:18:03.899 fused_ordering(737) 00:18:03.899 fused_ordering(738) 00:18:03.899 fused_ordering(739) 00:18:03.899 fused_ordering(740) 00:18:03.899 fused_ordering(741) 00:18:03.899 fused_ordering(742) 00:18:03.899 fused_ordering(743) 00:18:03.899 fused_ordering(744) 00:18:03.899 fused_ordering(745) 00:18:03.899 fused_ordering(746) 00:18:03.899 fused_ordering(747) 00:18:03.899 fused_ordering(748) 00:18:03.899 fused_ordering(749) 00:18:03.899 fused_ordering(750) 00:18:03.899 fused_ordering(751) 00:18:03.899 fused_ordering(752) 00:18:03.899 fused_ordering(753) 00:18:03.899 fused_ordering(754) 00:18:03.899 fused_ordering(755) 00:18:03.899 fused_ordering(756) 00:18:03.899 fused_ordering(757) 00:18:03.899 fused_ordering(758) 00:18:03.899 fused_ordering(759) 00:18:03.899 fused_ordering(760) 00:18:03.899 fused_ordering(761) 00:18:03.899 fused_ordering(762) 00:18:03.899 fused_ordering(763) 00:18:03.899 fused_ordering(764) 00:18:03.899 fused_ordering(765) 00:18:03.899 fused_ordering(766) 00:18:03.899 fused_ordering(767) 00:18:03.899 fused_ordering(768) 00:18:03.899 fused_ordering(769) 00:18:03.899 fused_ordering(770) 00:18:03.899 fused_ordering(771) 00:18:03.899 fused_ordering(772) 00:18:03.899 fused_ordering(773) 00:18:03.899 fused_ordering(774) 00:18:03.899 fused_ordering(775) 00:18:03.899 fused_ordering(776) 00:18:03.899 fused_ordering(777) 00:18:03.899 fused_ordering(778) 00:18:03.899 fused_ordering(779) 00:18:03.899 fused_ordering(780) 00:18:03.899 fused_ordering(781) 00:18:03.899 fused_ordering(782) 00:18:03.899 fused_ordering(783) 00:18:03.899 fused_ordering(784) 00:18:03.899 fused_ordering(785) 00:18:03.899 fused_ordering(786) 00:18:03.899 fused_ordering(787) 00:18:03.899 fused_ordering(788) 00:18:03.899 fused_ordering(789) 00:18:03.899 fused_ordering(790) 00:18:03.899 fused_ordering(791) 00:18:03.899 fused_ordering(792) 00:18:03.899 fused_ordering(793) 00:18:03.899 fused_ordering(794) 00:18:03.899 fused_ordering(795) 00:18:03.899 fused_ordering(796) 00:18:03.899 fused_ordering(797) 00:18:03.899 fused_ordering(798) 00:18:03.899 fused_ordering(799) 00:18:03.899 fused_ordering(800) 00:18:03.899 fused_ordering(801) 00:18:03.899 fused_ordering(802) 00:18:03.899 fused_ordering(803) 00:18:03.899 fused_ordering(804) 00:18:03.899 fused_ordering(805) 00:18:03.899 fused_ordering(806) 00:18:03.899 fused_ordering(807) 00:18:03.899 fused_ordering(808) 00:18:03.899 fused_ordering(809) 
00:18:03.899 fused_ordering(810) 00:18:03.899 fused_ordering(811) 00:18:03.899 fused_ordering(812) 00:18:03.899 fused_ordering(813) 00:18:03.899 fused_ordering(814) 00:18:03.899 fused_ordering(815) 00:18:03.899 fused_ordering(816) 00:18:03.899 fused_ordering(817) 00:18:03.899 fused_ordering(818) 00:18:03.899 fused_ordering(819) 00:18:03.899 fused_ordering(820) 00:18:04.160 fused_ordering(821) 00:18:04.160 fused_ordering(822) 00:18:04.160 fused_ordering(823) 00:18:04.160 fused_ordering(824) 00:18:04.160 fused_ordering(825) 00:18:04.160 fused_ordering(826) 00:18:04.160 fused_ordering(827) 00:18:04.160 fused_ordering(828) 00:18:04.160 fused_ordering(829) 00:18:04.160 fused_ordering(830) 00:18:04.160 fused_ordering(831) 00:18:04.160 fused_ordering(832) 00:18:04.160 fused_ordering(833) 00:18:04.160 fused_ordering(834) 00:18:04.160 fused_ordering(835) 00:18:04.160 fused_ordering(836) 00:18:04.160 fused_ordering(837) 00:18:04.160 fused_ordering(838) 00:18:04.160 fused_ordering(839) 00:18:04.160 fused_ordering(840) 00:18:04.160 fused_ordering(841) 00:18:04.160 fused_ordering(842) 00:18:04.160 fused_ordering(843) 00:18:04.160 fused_ordering(844) 00:18:04.160 fused_ordering(845) 00:18:04.160 fused_ordering(846) 00:18:04.160 fused_ordering(847) 00:18:04.160 fused_ordering(848) 00:18:04.160 fused_ordering(849) 00:18:04.160 fused_ordering(850) 00:18:04.160 fused_ordering(851) 00:18:04.160 fused_ordering(852) 00:18:04.160 fused_ordering(853) 00:18:04.160 fused_ordering(854) 00:18:04.160 fused_ordering(855) 00:18:04.160 fused_ordering(856) 00:18:04.160 fused_ordering(857) 00:18:04.160 fused_ordering(858) 00:18:04.160 fused_ordering(859) 00:18:04.160 fused_ordering(860) 00:18:04.160 fused_ordering(861) 00:18:04.160 fused_ordering(862) 00:18:04.160 fused_ordering(863) 00:18:04.160 fused_ordering(864) 00:18:04.160 fused_ordering(865) 00:18:04.160 fused_ordering(866) 00:18:04.160 fused_ordering(867) 00:18:04.160 fused_ordering(868) 00:18:04.160 fused_ordering(869) 00:18:04.160 fused_ordering(870) 00:18:04.160 fused_ordering(871) 00:18:04.160 fused_ordering(872) 00:18:04.160 fused_ordering(873) 00:18:04.160 fused_ordering(874) 00:18:04.160 fused_ordering(875) 00:18:04.160 fused_ordering(876) 00:18:04.160 fused_ordering(877) 00:18:04.160 fused_ordering(878) 00:18:04.160 fused_ordering(879) 00:18:04.160 fused_ordering(880) 00:18:04.160 fused_ordering(881) 00:18:04.160 fused_ordering(882) 00:18:04.160 fused_ordering(883) 00:18:04.160 fused_ordering(884) 00:18:04.160 fused_ordering(885) 00:18:04.160 fused_ordering(886) 00:18:04.160 fused_ordering(887) 00:18:04.160 fused_ordering(888) 00:18:04.160 fused_ordering(889) 00:18:04.160 fused_ordering(890) 00:18:04.160 fused_ordering(891) 00:18:04.160 fused_ordering(892) 00:18:04.160 fused_ordering(893) 00:18:04.160 fused_ordering(894) 00:18:04.160 fused_ordering(895) 00:18:04.160 fused_ordering(896) 00:18:04.160 fused_ordering(897) 00:18:04.160 fused_ordering(898) 00:18:04.160 fused_ordering(899) 00:18:04.160 fused_ordering(900) 00:18:04.160 fused_ordering(901) 00:18:04.160 fused_ordering(902) 00:18:04.160 fused_ordering(903) 00:18:04.160 fused_ordering(904) 00:18:04.160 fused_ordering(905) 00:18:04.160 fused_ordering(906) 00:18:04.160 fused_ordering(907) 00:18:04.160 fused_ordering(908) 00:18:04.160 fused_ordering(909) 00:18:04.160 fused_ordering(910) 00:18:04.160 fused_ordering(911) 00:18:04.160 fused_ordering(912) 00:18:04.160 fused_ordering(913) 00:18:04.160 fused_ordering(914) 00:18:04.160 fused_ordering(915) 00:18:04.160 fused_ordering(916) 00:18:04.160 
fused_ordering(917) 00:18:04.160 fused_ordering(918) 00:18:04.160 fused_ordering(919) 00:18:04.160 fused_ordering(920) 00:18:04.160 fused_ordering(921) 00:18:04.160 fused_ordering(922) 00:18:04.160 fused_ordering(923) 00:18:04.160 fused_ordering(924) 00:18:04.160 fused_ordering(925) 00:18:04.160 fused_ordering(926) 00:18:04.160 fused_ordering(927) 00:18:04.160 fused_ordering(928) 00:18:04.160 fused_ordering(929) 00:18:04.160 fused_ordering(930) 00:18:04.160 fused_ordering(931) 00:18:04.160 fused_ordering(932) 00:18:04.160 fused_ordering(933) 00:18:04.160 fused_ordering(934) 00:18:04.160 fused_ordering(935) 00:18:04.160 fused_ordering(936) 00:18:04.160 fused_ordering(937) 00:18:04.160 fused_ordering(938) 00:18:04.160 fused_ordering(939) 00:18:04.160 fused_ordering(940) 00:18:04.160 fused_ordering(941) 00:18:04.160 fused_ordering(942) 00:18:04.160 fused_ordering(943) 00:18:04.160 fused_ordering(944) 00:18:04.160 fused_ordering(945) 00:18:04.160 fused_ordering(946) 00:18:04.160 fused_ordering(947) 00:18:04.160 fused_ordering(948) 00:18:04.160 fused_ordering(949) 00:18:04.160 fused_ordering(950) 00:18:04.160 fused_ordering(951) 00:18:04.160 fused_ordering(952) 00:18:04.160 fused_ordering(953) 00:18:04.160 fused_ordering(954) 00:18:04.160 fused_ordering(955) 00:18:04.160 fused_ordering(956) 00:18:04.160 fused_ordering(957) 00:18:04.160 fused_ordering(958) 00:18:04.160 fused_ordering(959) 00:18:04.160 fused_ordering(960) 00:18:04.160 fused_ordering(961) 00:18:04.160 fused_ordering(962) 00:18:04.160 fused_ordering(963) 00:18:04.160 fused_ordering(964) 00:18:04.160 fused_ordering(965) 00:18:04.160 fused_ordering(966) 00:18:04.160 fused_ordering(967) 00:18:04.160 fused_ordering(968) 00:18:04.160 fused_ordering(969) 00:18:04.160 fused_ordering(970) 00:18:04.160 fused_ordering(971) 00:18:04.160 fused_ordering(972) 00:18:04.160 fused_ordering(973) 00:18:04.160 fused_ordering(974) 00:18:04.160 fused_ordering(975) 00:18:04.160 fused_ordering(976) 00:18:04.160 fused_ordering(977) 00:18:04.160 fused_ordering(978) 00:18:04.160 fused_ordering(979) 00:18:04.160 fused_ordering(980) 00:18:04.160 fused_ordering(981) 00:18:04.160 fused_ordering(982) 00:18:04.160 fused_ordering(983) 00:18:04.160 fused_ordering(984) 00:18:04.160 fused_ordering(985) 00:18:04.160 fused_ordering(986) 00:18:04.161 fused_ordering(987) 00:18:04.161 fused_ordering(988) 00:18:04.161 fused_ordering(989) 00:18:04.161 fused_ordering(990) 00:18:04.161 fused_ordering(991) 00:18:04.161 fused_ordering(992) 00:18:04.161 fused_ordering(993) 00:18:04.161 fused_ordering(994) 00:18:04.161 fused_ordering(995) 00:18:04.161 fused_ordering(996) 00:18:04.161 fused_ordering(997) 00:18:04.161 fused_ordering(998) 00:18:04.161 fused_ordering(999) 00:18:04.161 fused_ordering(1000) 00:18:04.161 fused_ordering(1001) 00:18:04.161 fused_ordering(1002) 00:18:04.161 fused_ordering(1003) 00:18:04.161 fused_ordering(1004) 00:18:04.161 fused_ordering(1005) 00:18:04.161 fused_ordering(1006) 00:18:04.161 fused_ordering(1007) 00:18:04.161 fused_ordering(1008) 00:18:04.161 fused_ordering(1009) 00:18:04.161 fused_ordering(1010) 00:18:04.161 fused_ordering(1011) 00:18:04.161 fused_ordering(1012) 00:18:04.161 fused_ordering(1013) 00:18:04.161 fused_ordering(1014) 00:18:04.161 fused_ordering(1015) 00:18:04.161 fused_ordering(1016) 00:18:04.161 fused_ordering(1017) 00:18:04.161 fused_ordering(1018) 00:18:04.161 fused_ordering(1019) 00:18:04.161 fused_ordering(1020) 00:18:04.161 fused_ordering(1021) 00:18:04.161 fused_ordering(1022) 00:18:04.161 fused_ordering(1023) 00:18:04.161 
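For reference, the setup exercised in the trace above condenses to roughly the following shell sketch. The interface names (cvl_0_0/cvl_0_1), addresses, subsystem NQN, RPC arguments and binary paths are copied from the echoed commands; the direct scripts/rpc.py invocation is an assumption standing in for the rpc_cmd wrapper, and backgrounding/wait details are omitted.

# Sketch only -- condensed from the xtrace output above, not an excerpt of the test scripts.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS="ip netns exec cvl_0_0_ns_spdk"   # NVMF_TARGET_NS_CMD in the log

# nvmf_tcp_init: move the target-side port into its own namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
$NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
$NS ip link set cvl_0_0 up
$NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && $NS ping -c 1 10.0.0.1   # reachability check in both directions
modprobe nvme-tcp

# Start nvmf_tgt inside the namespace, then configure it over the RPC socket
# (rpc_cmd in the log; assumed here to map onto scripts/rpc.py calls).
$NS "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" bdev_null_create NULL1 1000 512
"$SPDK/scripts/rpc.py" bdev_wait_for_examine
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Initiator side: run the fused-ordering workload against the listener
# (the 1024 fused_ordering(...) iterations printed above).
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'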
09:30:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:04.161 09:30:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:04.161 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.161 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.421 rmmod nvme_tcp 00:18:04.421 rmmod nvme_fabrics 00:18:04.421 rmmod nvme_keyring 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 234860 ']' 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 234860 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 234860 ']' 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 234860 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 234860 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 234860' 00:18:04.421 killing process with pid 234860 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 234860 00:18:04.421 [2024-05-16 09:30:57.857000] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:04.421 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 234860 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.681 09:30:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.591 
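The teardown that follows (nvmftestfini) amounts to roughly the steps below. This is again a sketch assembled from the echoed commands; killing by the recorded pid stands in for the killprocess helper, and deleting the namespace is the assumed effect of _remove_spdk_ns.

# Sketch of the cleanup path shown above.
sync
modprobe -v -r nvme-tcp             # unloads nvme_tcp / nvme_fabrics / nvme_keyring per the rmmod lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                     # killprocess 234860 in this run
ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1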
09:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:06.591 00:18:06.591 real 0m13.147s 00:18:06.591 user 0m7.871s 00:18:06.591 sys 0m6.384s 00:18:06.591 09:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:06.591 09:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:06.591 ************************************ 00:18:06.591 END TEST nvmf_fused_ordering 00:18:06.591 ************************************ 00:18:06.591 09:31:00 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:06.591 09:31:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:06.591 09:31:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:06.591 09:31:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:06.852 ************************************ 00:18:06.852 START TEST nvmf_delete_subsystem 00:18:06.852 ************************************ 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:06.852 * Looking for test storage... 00:18:06.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:18:06.852 09:31:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:18:13.443 09:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:18:13.443 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:18:13.443 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:18:13.443 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.443 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.443 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:13.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:13.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.705 09:31:07 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:13.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:13.705 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 
-- # ip -4 addr flush cvl_0_0 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.705 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.706 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:13.706 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.706 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:13.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:18:13.967 00:18:13.967 --- 10.0.0.2 ping statistics --- 00:18:13.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.967 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:18:13.967 00:18:13.967 --- 10.0.0.1 ping statistics --- 00:18:13.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.967 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=239871 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # 
waitforlisten 239871 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 239871 ']' 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:13.967 09:31:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:13.967 [2024-05-16 09:31:07.399343] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:13.967 [2024-05-16 09:31:07.399406] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.967 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.967 [2024-05-16 09:31:07.469994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:14.227 [2024-05-16 09:31:07.543775] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.227 [2024-05-16 09:31:07.543813] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.227 [2024-05-16 09:31:07.543821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.227 [2024-05-16 09:31:07.543827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.227 [2024-05-16 09:31:07.543833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
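The setup traced above boils down to a small amount of shell: nvmf_tcp_init moves one port of the E810 adapter into a private network namespace and addresses both ends of the link, and nvmfappstart launches nvmf_tgt inside that namespace. A minimal sketch, using the exact commands recorded in the trace; the polling loop only stands in for waitforlisten, and the rpc_get_methods probe via scripts/rpc.py is an assumption made here for illustration:

    # Isolate one port (cvl_0_0) in its own namespace; keep the peer (cvl_0_1) in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic back in

    # Start the target inside the namespace and wait for its RPC socket to answer.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # simplified stand-in for waitforlisten
    done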
00:18:14.228 [2024-05-16 09:31:07.543967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.228 [2024-05-16 09:31:07.543968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 [2024-05-16 09:31:08.231788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 [2024-05-16 09:31:08.247772] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:14.800 [2024-05-16 09:31:08.247947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 NULL1 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:14.800 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.800 09:31:08 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 Delay0 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=240030 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:18:14.801 09:31:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:14.801 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.801 [2024-05-16 09:31:08.332714] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:17.351 09:31:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.351 09:31:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.351 09:31:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: 
-6 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 starting I/O failed: -6 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write 
completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 [2024-05-16 09:31:10.578175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72dc80 is same with the state(5) to be set 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.351 Write completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 Read completed with error (sct=0, sc=8) 00:18:17.351 starting I/O failed: -6 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 starting I/O failed: -6 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 starting I/O failed: -6 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 starting I/O failed: -6 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 starting I/O failed: -6 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 starting I/O failed: -6 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 [2024-05-16 09:31:10.582654] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99cc00c470 is same with the state(5) to be set 00:18:17.352 starting I/O failed: -6 00:18:17.352 starting I/O failed: -6 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read 
completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Write completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:17.352 Read completed with error (sct=0, sc=8) 00:18:18.296 [2024-05-16 09:31:11.555117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70d550 is same with the state(5) to be set 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Write completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Write completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Write completed with error (sct=0, 
sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Write completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.296 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 [2024-05-16 09:31:11.580592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72e220 is same with the state(5) to be set 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 [2024-05-16 09:31:11.580686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72de60 is same with the state(5) to be set 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 [2024-05-16 09:31:11.584595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99cc00bfe0 is same with the state(5) to be set 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 
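The error completions above are the point of delete_subsystem.sh: I/O is kept in flight by spdk_nvme_perf while the subsystem is deleted underneath it. A condensed sketch of that flow, pieced together from the commands traced above and just below (rpc_cmd is the harness's rpc.py helper; the timeout handling here is simplified):

    # Generate load against the target, then delete the subsystem mid-run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # in-flight I/O now completes with sc=8, as logged
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # wait for perf to notice the loss and exit
        (( delay++ > 30 )) && break              # illustrative bound taken from the delay counter in the trace
        sleep 0.5
    done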
00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Write completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 Read completed with error (sct=0, sc=8) 00:18:18.297 [2024-05-16 09:31:11.584674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99cc00c780 is same with the state(5) to be set 00:18:18.297 Initializing NVMe Controllers 00:18:18.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:18.297 Controller IO queue size 128, less than required. 00:18:18.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:18.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:18.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:18.297 Initialization complete. Launching workers. 00:18:18.297 ======================================================== 00:18:18.297 Latency(us) 00:18:18.297 Device Information : IOPS MiB/s Average min max 00:18:18.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.91 0.08 899764.36 301.36 1006411.25 00:18:18.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.94 0.08 915375.07 246.98 1010246.58 00:18:18.297 ======================================================== 00:18:18.297 Total : 330.85 0.16 907405.14 246.98 1010246.58 00:18:18.297 00:18:18.297 [2024-05-16 09:31:11.585377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70d550 (9): Bad file descriptor 00:18:18.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:18:18.297 09:31:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.297 09:31:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:18:18.297 09:31:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 240030 00:18:18.297 09:31:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 240030 00:18:18.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (240030) - No such process 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 240030 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:18:18.559 09:31:12 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 240030 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 240030 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.559 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:18.559 [2024-05-16 09:31:12.113592] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=240904 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:18.820 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:18.820 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.820 [2024-05-16 09:31:12.173536] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even 
though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:19.082 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:19.082 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:19.082 09:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:19.655 09:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:19.655 09:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:19.655 09:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:20.226 09:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:20.226 09:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:20.226 09:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:20.798 09:31:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:20.798 09:31:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:20.798 09:31:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:21.371 09:31:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:21.371 09:31:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:21.371 09:31:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:21.632 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:21.632 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:21.632 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:21.893 Initializing NVMe Controllers 00:18:21.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:21.893 Controller IO queue size 128, less than required. 00:18:21.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:21.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:21.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:21.893 Initialization complete. Launching workers. 
00:18:21.893 ======================================================== 00:18:21.893 Latency(us) 00:18:21.893 Device Information : IOPS MiB/s Average min max 00:18:21.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002151.92 1000181.58 1040749.97 00:18:21.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003133.23 1000161.59 1041025.38 00:18:21.893 ======================================================== 00:18:21.893 Total : 256.00 0.12 1002642.57 1000161.59 1041025.38 00:18:21.893 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 240904 00:18:22.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (240904) - No such process 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 240904 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.154 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.154 rmmod nvme_tcp 00:18:22.154 rmmod nvme_fabrics 00:18:22.154 rmmod nvme_keyring 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 239871 ']' 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 239871 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 239871 ']' 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 239871 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 239871 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 239871' 00:18:22.417 killing process with pid 239871 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 239871 00:18:22.417 [2024-05-16 09:31:15.779921] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 239871 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.417 09:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.967 09:31:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:24.967 00:18:24.967 real 0m17.821s 00:18:24.967 user 0m31.072s 00:18:24.967 sys 0m6.128s 00:18:24.967 09:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:24.967 09:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:24.967 ************************************ 00:18:24.967 END TEST nvmf_delete_subsystem 00:18:24.967 ************************************ 00:18:24.967 09:31:18 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:24.967 09:31:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:24.967 09:31:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:24.967 09:31:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.967 ************************************ 00:18:24.967 START TEST nvmf_ns_masking 00:18:24.967 ************************************ 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:24.967 * Looking for test storage... 
00:18:24.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.967 09:31:18 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=e04cf907-55eb-441d-8fb6-a2e7a097df52 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.968 09:31:18 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:18:24.968 09:31:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:31.558 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:31.558 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:31.558 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
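At this point gather_supported_nvmf_pci_devs has matched both 0x8086:0x159b functions (E810, ice driver) and is resolving each PCI address to its kernel net device through /sys/bus/pci/devices/<addr>/net/, which is where the cvl_0_0 / cvl_0_1 names come from. A minimal sysfs-only sketch of that mapping, assuming nothing beyond a standard sysfs layout (this is not the helper's exact code):

# Sketch: map Intel E810 PCI functions (8086:1592 / 8086:159b) to their net devices.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")          # e.g. 0x8086
    device=$(cat "$pci/device")          # e.g. 0x159b
    [[ $vendor == 0x8086 ]] || continue
    case $device in
    0x1592|0x159b)
        for net in "$pci"/net/*; do      # one entry per kernel net device on this function
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
        ;;
    esac
done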
00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.558 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:31.558 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.559 09:31:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:18:31.559 00:18:31.559 --- 10.0.0.2 ping statistics --- 00:18:31.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.559 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:18:31.559 00:18:31.559 --- 10.0.0.1 ping statistics --- 00:18:31.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.559 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.559 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=245597 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 245597 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 245597 ']' 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.821 09:31:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.821 [2024-05-16 09:31:25.215377] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
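The bring-up traced above is self-contained: one E810 port is moved into a private network namespace, both sides get a 10.0.0.0/24 address, TCP port 4420 is opened in the firewall, reachability is confirmed with ping in both directions, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace with all tracepoint groups enabled on four cores. A minimal stand-alone sketch of the same steps, assuming the cvl_0_0/cvl_0_1 names from this run and a placeholder SPDK_DIR:

# Sketch of the phy-mode NVMe/TCP test bring-up (names and IPs as in the log above).
SPDK_DIR=/path/to/spdk                  # assumption: an SPDK build tree
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic

ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
modprobe nvme-tcp

ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
# Rough equivalent of waitforlisten: poll the JSON-RPC socket until it answers.
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done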
00:18:31.821 [2024-05-16 09:31:25.215443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.821 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.821 [2024-05-16 09:31:25.287782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.821 [2024-05-16 09:31:25.365755] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.821 [2024-05-16 09:31:25.365793] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.821 [2024-05-16 09:31:25.365801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.821 [2024-05-16 09:31:25.365808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.821 [2024-05-16 09:31:25.365813] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.821 [2024-05-16 09:31:25.365951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.821 [2024-05-16 09:31:25.366070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.821 [2024-05-16 09:31:25.366172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.821 [2024-05-16 09:31:25.366173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:32.764 [2024-05-16 09:31:26.185058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:18:32.764 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:33.025 Malloc1 00:18:33.025 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:33.025 Malloc2 00:18:33.025 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:33.287 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:33.549 09:31:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.549 [2024-05-16 09:31:27.041852] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:33.549 [2024-05-16 09:31:27.042106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.549 09:31:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:18:33.549 09:31:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e04cf907-55eb-441d-8fb6-a2e7a097df52 -a 10.0.0.2 -s 4420 -i 4 00:18:33.810 09:31:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:18:33.810 09:31:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:18:33.810 09:31:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:33.810 09:31:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:33.810 09:31:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:35.726 [ 0]:0x1 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:35.726 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f6b437995a964e8590c56b72f3882720 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f6b437995a964e8590c56b72f3882720 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:35.988 [ 0]:0x1 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f6b437995a964e8590c56b72f3882720 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f6b437995a964e8590c56b72f3882720 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:35.988 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:35.988 [ 1]:0x2 00:18:36.249 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:36.249 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:36.249 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:36.249 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:36.249 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:18:36.249 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.510 09:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.510 09:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:36.771 09:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:18:36.771 09:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e04cf907-55eb-441d-8fb6-a2e7a097df52 -a 10.0.0.2 -s 4420 -i 4 00:18:37.033 09:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:37.033 09:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:18:37.033 09:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.033 09:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:18:37.033 09:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:18:37.033 09:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:18:38.948 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:38.948 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:38.948 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:38.949 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:39.213 [ 0]:0x2 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:39.213 [ 0]:0x1 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.213 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f6b437995a964e8590c56b72f3882720 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f6b437995a964e8590c56b72f3882720 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:39.473 [ 1]:0x2 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.473 09:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:39.473 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:39.734 09:31:33 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:39.734 [ 0]:0x2 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:39.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:39.734 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:39.995 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:18:39.995 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e04cf907-55eb-441d-8fb6-a2e7a097df52 -a 10.0.0.2 -s 4420 -i 4 00:18:40.255 09:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:40.255 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:18:40.255 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.255 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:18:40.255 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:18:40.255 09:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:42.167 [ 0]:0x1 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=f6b437995a964e8590c56b72f3882720 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ f6b437995a964e8590c56b72f3882720 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:42.167 [ 1]:0x2 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:42.167 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:42.430 09:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:42.692 [ 0]:0x2 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.692 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.693 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.693 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.693 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:42.693 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:42.693 [2024-05-16 09:31:36.226353] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:42.693 
request: 00:18:42.693 { 00:18:42.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.693 "nsid": 2, 00:18:42.693 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.693 "method": "nvmf_ns_remove_host", 00:18:42.693 "req_id": 1 00:18:42.693 } 00:18:42.693 Got JSON-RPC error response 00:18:42.693 response: 00:18:42.693 { 00:18:42.693 "code": -32602, 00:18:42.693 "message": "Invalid parameters" 00:18:42.693 } 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:42.955 [ 0]:0x2 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c8dee0779e4342228d5afc85e120d0d5 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c8dee0779e4342228d5afc85e120d0d5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:18:42.955 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.217 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.217 rmmod nvme_tcp 00:18:43.217 rmmod nvme_fabrics 00:18:43.217 rmmod nvme_keyring 00:18:43.478 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 245597 ']' 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 245597 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 245597 ']' 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 245597 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 245597 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 245597' 00:18:43.479 killing process with pid 245597 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 245597 00:18:43.479 [2024-05-16 09:31:36.849948] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:43.479 09:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 245597 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
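The whole masking exercise above reduces to a short RPC sequence on the target: create the TCP transport, back two namespaces with malloc bdevs, export them from cnode1 on 10.0.0.2:4420, and use --no-auto-visible together with nvmf_ns_add_host / nvmf_ns_remove_host to control which host NQNs may see namespace 1, while namespace 2 stays auto-visible (which is why the final nvmf_ns_remove_host on nsid 2 comes back with the -32602 "Invalid parameters" error shown above). A condensed sketch of those target-side calls, with rpc pointing at a placeholder SPDK checkout:

# Sketch of the target-side provisioning and masking RPCs exercised by ns_masking.sh.
rpc=/path/to/spdk/scripts/rpc.py        # assumption: rpc.py talking to the running target

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc1
"$rpc" bdev_malloc_create 64 512 -b Malloc2
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Namespace 1 is masked: only hosts added explicitly below may see it.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Namespace 2 is left auto-visible to every connected host.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2

# Toggle visibility of the masked namespace for one host NQN.
"$rpc" nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
"$rpc" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# On the auto-visible namespace the same call is rejected, as the log shows.
"$rpc" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || true

# Teardown mirrors the end of the test: drop the subsystem before stopping the target.
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1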
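On the initiator side the visibility checks are done entirely with nvme-cli: ns_is_visible requires the namespace ID to appear in nvme list-ns and its NGUID from nvme id-ns to be non-zero. A stand-alone sketch of that connect/check/disconnect flow, reusing the host NQN, host ID and listener address from this run:

# Sketch of the initiator-side checks behind connect() and ns_is_visible().
HOSTNQN=nqn.2016-06.io.spdk:host1
HOSTID=e04cf907-55eb-441d-8fb6-a2e7a097df52

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" -I "$HOSTID" \
     -a 10.0.0.2 -s 4420 -i 4

# Resolve which controller the kernel created for this subsystem (nvme0 in this run).
ctrl=$(nvme list-subsys -o json |
       jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')

ns_is_visible() {                        # $1 = namespace id, e.g. 0x1
    nvme list-ns "/dev/$ctrl" | grep -q "$1" || return 1
    local nguid
    nguid=$(nvme id-ns "/dev/$ctrl" -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible 0x1 && echo "namespace 1 visible to $HOSTNQN"
ns_is_visible 0x2 && echo "namespace 2 visible to $HOSTNQN"

nvme disconnect -n nqn.2016-06.io.spdk:cnode1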
00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.479 09:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.031 09:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:46.031 00:18:46.031 real 0m21.013s 00:18:46.031 user 0m51.054s 00:18:46.031 sys 0m6.659s 00:18:46.031 09:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:46.031 09:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:46.031 ************************************ 00:18:46.031 END TEST nvmf_ns_masking 00:18:46.031 ************************************ 00:18:46.031 09:31:39 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:18:46.031 09:31:39 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:46.031 09:31:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:46.031 09:31:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:46.031 09:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:46.031 ************************************ 00:18:46.031 START TEST nvmf_nvme_cli 00:18:46.031 ************************************ 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:46.031 * Looking for test storage... 
00:18:46.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.031 09:31:39 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.032 09:31:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:52.658 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:52.658 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:52.658 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.658 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:52.658 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.659 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.920 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:18:52.921 00:18:52.921 --- 10.0.0.2 ping statistics --- 00:18:52.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.921 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.534 ms 00:18:52.921 00:18:52.921 --- 10.0.0.1 ping statistics --- 00:18:52.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.921 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.921 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=252387 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 252387 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 252387 ']' 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:53.183 09:31:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:53.183 [2024-05-16 09:31:46.542403] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:18:53.183 [2024-05-16 09:31:46.542464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.183 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.183 [2024-05-16 09:31:46.613085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.183 [2024-05-16 09:31:46.690302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.183 [2024-05-16 09:31:46.690340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:53.183 [2024-05-16 09:31:46.690347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.183 [2024-05-16 09:31:46.690354] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.183 [2024-05-16 09:31:46.690360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.183 [2024-05-16 09:31:46.690525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.183 [2024-05-16 09:31:46.690646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.183 [2024-05-16 09:31:46.690804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.183 [2024-05-16 09:31:46.690805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.126 [2024-05-16 09:31:47.372639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.126 Malloc0 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.126 Malloc1 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.126 09:31:47 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.126 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.127 [2024-05-16 09:31:47.459838] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:54.127 [2024-05-16 09:31:47.460080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:18:54.127 00:18:54.127 Discovery Log Number of Records 2, Generation counter 2 00:18:54.127 =====Discovery Log Entry 0====== 00:18:54.127 trtype: tcp 00:18:54.127 adrfam: ipv4 00:18:54.127 subtype: current discovery subsystem 00:18:54.127 treq: not required 00:18:54.127 portid: 0 00:18:54.127 trsvcid: 4420 00:18:54.127 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:54.127 traddr: 10.0.0.2 00:18:54.127 eflags: explicit discovery connections, duplicate discovery information 00:18:54.127 sectype: none 00:18:54.127 =====Discovery Log Entry 1====== 00:18:54.127 trtype: tcp 00:18:54.127 adrfam: ipv4 00:18:54.127 subtype: nvme subsystem 00:18:54.127 treq: not required 00:18:54.127 portid: 0 00:18:54.127 trsvcid: 4420 00:18:54.127 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:54.127 traddr: 10.0.0.2 00:18:54.127 eflags: none 00:18:54.127 sectype: none 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
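The discovery log printed above is what the test inspects before connecting: two records, the well-known discovery subsystem (nqn.2014-08.org.nvmexpress.discovery) and the data subsystem nqn.2016-06.io.spdk:cnode1, both reachable over TCP at 10.0.0.2 port 4420. A minimal sketch of the same steps run by hand follows; the hostnqn/hostid values are the ones generated for this particular run and the serial check mirrors what waitforserial greps for, so treat the exact values as run-specific rather than required.

# Discover what the SPDK target exposes on 10.0.0.2:4420
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204

# Connect to the data subsystem; its two malloc namespaces appear as /dev/nvme0n1 and /dev/nvme0n2
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204

# Verify the namespaces by serial number, then tear the connection down again
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
nvme disconnect -n nqn.2016-06.io.spdk:cnode1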
00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:54.127 09:31:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:56.042 09:31:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:56.042 09:31:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:18:56.042 09:31:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.042 09:31:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:18:56.042 09:31:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:18:56.042 09:31:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:18:57.958 /dev/nvme0n1 ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:18:57.958 09:31:51 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:57.958 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.220 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.220 rmmod nvme_tcp 00:18:58.482 rmmod nvme_fabrics 00:18:58.482 rmmod nvme_keyring 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 252387 ']' 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 252387 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 252387 ']' 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 252387 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 252387 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 252387' 00:18:58.482 killing process with pid 252387 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 252387 00:18:58.482 [2024-05-16 09:31:51.885784] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:58.482 09:31:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 252387 00:18:58.482 09:31:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.482 09:31:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.742 09:31:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.742 09:31:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.742 09:31:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.742 09:31:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.742 09:31:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.742 09:31:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.655 09:31:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.655 00:19:00.655 real 0m14.945s 00:19:00.655 user 0m23.537s 00:19:00.655 sys 0m5.787s 00:19:00.655 09:31:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.655 09:31:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:00.655 ************************************ 00:19:00.655 END TEST nvmf_nvme_cli 00:19:00.655 ************************************ 00:19:00.655 09:31:54 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:19:00.655 09:31:54 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:00.655 09:31:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:00.655 09:31:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.655 09:31:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.655 ************************************ 00:19:00.655 START TEST 
nvmf_vfio_user 00:19:00.655 ************************************ 00:19:00.655 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:00.918 * Looking for test storage... 00:19:00.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.918 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
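Before the vfio-user target is launched, the test sets the same sizing knobs the TCP test used plus its own transport selection and a clean socket directory. A condensed sketch of that preamble, with the CI workspace path replaced by a placeholder variable (SPDK_DIR is not in the original trace):

# Preamble of the vfio-user test as traced above (SPDK_DIR stands in for the CI checkout path)
rpc_py=$SPDK_DIR/scripts/rpc.py
MALLOC_BDEV_SIZE=64        # MiB per malloc bdev
MALLOC_BLOCK_SIZE=512      # bytes per block
NUM_DEVICES=2              # two vfio-user controllers get created
export TEST_TRANSPORT=VFIOUSER
rm -rf /var/run/vfio-user  # clear any sockets left over from a previous run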
00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=253909 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 253909' 00:19:00.919 Process pid: 253909 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 253909 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 253909 ']' 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:00.919 09:31:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:00.919 [2024-05-16 09:31:54.397527] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:00.919 [2024-05-16 09:31:54.397599] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.919 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.919 [2024-05-16 09:31:54.462877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.179 [2024-05-16 09:31:54.537672] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.179 [2024-05-16 09:31:54.537711] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.179 [2024-05-16 09:31:54.537719] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.179 [2024-05-16 09:31:54.537725] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.179 [2024-05-16 09:31:54.537731] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
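The launch that produces the EAL and tracepoint notices above amounts to starting nvmf_tgt pinned to cores 0-3 and blocking until its RPC socket answers. A hand-run equivalent is sketched below under the assumption of a local SPDK build; SPDK_DIR is a placeholder, and the polling loop is only an approximation of the waitforlisten helper used by the test.

# Start the target with the same flags seen in the trace (-i shm id, -e tracepoint mask, -m core list)
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
# Wait until the app listens on /var/tmp/spdk.sock before issuing further RPCs
until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done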
00:19:01.179 [2024-05-16 09:31:54.537864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.179 [2024-05-16 09:31:54.537977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.180 [2024-05-16 09:31:54.538110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.180 [2024-05-16 09:31:54.538110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.750 09:31:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:01.750 09:31:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:19:01.750 09:31:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:02.692 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:02.951 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:02.951 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:02.951 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:02.951 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:02.951 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:03.211 Malloc1 00:19:03.211 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:03.211 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:03.471 09:31:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:03.471 [2024-05-16 09:31:57.025051] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:03.732 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:03.732 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:03.732 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:03.732 Malloc2 00:19:03.732 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:03.993 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
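Everything setup_nvmf_vfio_user does per device is visible in the trace above; condensed, it is one VFIOUSER transport plus, for each of the two controllers, a directory, a malloc bdev, a subsystem, a namespace, and a listener whose address is that directory. A sketch of the same RPC sequence, using the rpc_py variable from the preamble and the NQNs, serials, and paths of this run:

$rpc_py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc_py bdev_malloc_create 64 512 -b Malloc$i
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done

With both controllers in place, run_nvmf_vfio_user points spdk_nvme_identify at /var/run/vfio-user/domain/vfio-user1/1, which is what produces the BAR-mapping and controller-enable debug output that follows.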
00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:04.254 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:04.254 [2024-05-16 09:31:57.746655] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:04.254 [2024-05-16 09:31:57.746716] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254586 ] 00:19:04.254 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.254 [2024-05-16 09:31:57.780688] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:04.254 [2024-05-16 09:31:57.789408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:04.254 [2024-05-16 09:31:57.789427] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f03b618e000 00:19:04.254 [2024-05-16 09:31:57.790401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.791409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.792415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.793421] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.794426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.795437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.796441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.797450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:04.254 [2024-05-16 09:31:57.798451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:04.254 [2024-05-16 09:31:57.798464] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f03b6183000 00:19:04.254 [2024-05-16 09:31:57.799792] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:04.517 [2024-05-16 09:31:57.816724] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:04.517 [2024-05-16 09:31:57.816750] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:19:04.517 [2024-05-16 09:31:57.821596] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:04.518 [2024-05-16 09:31:57.821648] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:04.518 [2024-05-16 09:31:57.821731] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:19:04.518 [2024-05-16 09:31:57.821747] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:19:04.518 [2024-05-16 09:31:57.821753] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:19:04.518 [2024-05-16 09:31:57.822596] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:04.518 [2024-05-16 09:31:57.822605] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:19:04.518 [2024-05-16 09:31:57.822612] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:19:04.518 [2024-05-16 09:31:57.823598] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:04.518 [2024-05-16 09:31:57.823607] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:19:04.518 [2024-05-16 09:31:57.823614] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:19:04.518 [2024-05-16 09:31:57.824600] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:04.518 [2024-05-16 09:31:57.824608] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:04.518 [2024-05-16 09:31:57.825606] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:04.518 [2024-05-16 09:31:57.825614] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:19:04.518 [2024-05-16 09:31:57.825619] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:19:04.518 [2024-05-16 09:31:57.825626] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:04.518 
[2024-05-16 09:31:57.825732] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:19:04.518 [2024-05-16 09:31:57.825736] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:04.518 [2024-05-16 09:31:57.825741] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:04.518 [2024-05-16 09:31:57.826621] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:04.518 [2024-05-16 09:31:57.827626] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:04.518 [2024-05-16 09:31:57.828631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:04.518 [2024-05-16 09:31:57.829633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:04.518 [2024-05-16 09:31:57.829685] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:04.518 [2024-05-16 09:31:57.830645] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:04.518 [2024-05-16 09:31:57.830653] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:04.518 [2024-05-16 09:31:57.830658] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830679] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:19:04.518 [2024-05-16 09:31:57.830687] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830704] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:04.518 [2024-05-16 09:31:57.830709] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.518 [2024-05-16 09:31:57.830721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.518 [2024-05-16 09:31:57.830757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.830767] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:19:04.518 [2024-05-16 09:31:57.830771] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:19:04.518 [2024-05-16 09:31:57.830776] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:19:04.518 [2024-05-16 09:31:57.830781] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:04.518 [2024-05-16 09:31:57.830789] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:19:04.518 [2024-05-16 09:31:57.830793] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:19:04.518 [2024-05-16 09:31:57.830798] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830805] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:04.518 [2024-05-16 09:31:57.830825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.830835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.518 [2024-05-16 09:31:57.830843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.518 [2024-05-16 09:31:57.830852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.518 [2024-05-16 09:31:57.830860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.518 [2024-05-16 09:31:57.830864] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830873] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:04.518 [2024-05-16 09:31:57.830891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.830896] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:19:04.518 [2024-05-16 09:31:57.830901] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830908] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830913] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:04.518 [2024-05-16 
09:31:57.830933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.830982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.830997] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:04.518 [2024-05-16 09:31:57.831001] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:04.518 [2024-05-16 09:31:57.831007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:04.518 [2024-05-16 09:31:57.831020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.831029] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:19:04.518 [2024-05-16 09:31:57.831039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.831047] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.831058] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:04.518 [2024-05-16 09:31:57.831063] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.518 [2024-05-16 09:31:57.831069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.518 [2024-05-16 09:31:57.831086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.831123] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.831131] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.831138] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:04.518 [2024-05-16 09:31:57.831142] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.518 [2024-05-16 09:31:57.831148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.518 [2024-05-16 09:31:57.831161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:04.518 [2024-05-16 09:31:57.831168] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:04.518 
[2024-05-16 09:31:57.831175] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.831182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:19:04.518 [2024-05-16 09:31:57.831188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:04.519 [2024-05-16 09:31:57.831193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:19:04.519 [2024-05-16 09:31:57.831198] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:19:04.519 [2024-05-16 09:31:57.831202] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:19:04.519 [2024-05-16 09:31:57.831207] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:19:04.519 [2024-05-16 09:31:57.831226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831302] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:04.519 [2024-05-16 09:31:57.831307] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:04.519 [2024-05-16 09:31:57.831310] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:04.519 [2024-05-16 09:31:57.831314] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:04.519 [2024-05-16 09:31:57.831320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:04.519 [2024-05-16 09:31:57.831328] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:04.519 [2024-05-16 09:31:57.831332] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:04.519 [2024-05-16 09:31:57.831338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831345] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:04.519 [2024-05-16 09:31:57.831349] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:04.519 [2024-05-16 09:31:57.831357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831365] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:04.519 [2024-05-16 09:31:57.831369] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:04.519 [2024-05-16 09:31:57.831375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:04.519 [2024-05-16 09:31:57.831382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:04.519 [2024-05-16 09:31:57.831413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:04.519 ===================================================== 00:19:04.519 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:04.519 ===================================================== 00:19:04.519 Controller Capabilities/Features 00:19:04.519 ================================ 00:19:04.519 Vendor ID: 4e58 00:19:04.519 Subsystem Vendor ID: 4e58 00:19:04.519 Serial Number: SPDK1 00:19:04.519 Model Number: SPDK bdev Controller 00:19:04.519 Firmware Version: 24.05 00:19:04.519 Recommended Arb Burst: 6 00:19:04.519 IEEE OUI Identifier: 8d 6b 50 00:19:04.519 Multi-path I/O 00:19:04.519 May have multiple subsystem ports: Yes 00:19:04.519 May have multiple controllers: Yes 00:19:04.519 Associated with SR-IOV VF: No 00:19:04.519 Max Data Transfer Size: 131072 00:19:04.519 Max Number of Namespaces: 32 00:19:04.519 Max Number of I/O Queues: 127 00:19:04.519 NVMe Specification Version (VS): 1.3 00:19:04.519 NVMe Specification Version (Identify): 1.3 00:19:04.519 Maximum Queue Entries: 256 00:19:04.519 Contiguous Queues Required: Yes 00:19:04.519 Arbitration Mechanisms Supported 00:19:04.519 Weighted Round Robin: Not Supported 00:19:04.519 Vendor Specific: Not Supported 00:19:04.519 Reset Timeout: 15000 ms 00:19:04.519 Doorbell Stride: 4 bytes 00:19:04.519 NVM Subsystem Reset: Not Supported 00:19:04.519 Command Sets Supported 00:19:04.519 NVM Command Set: Supported 00:19:04.519 Boot Partition: Not Supported 00:19:04.519 Memory Page Size Minimum: 4096 bytes 00:19:04.519 Memory Page Size Maximum: 4096 bytes 00:19:04.519 Persistent Memory Region: Not Supported 00:19:04.519 Optional Asynchronous 
Events Supported 00:19:04.519 Namespace Attribute Notices: Supported 00:19:04.519 Firmware Activation Notices: Not Supported 00:19:04.519 ANA Change Notices: Not Supported 00:19:04.519 PLE Aggregate Log Change Notices: Not Supported 00:19:04.519 LBA Status Info Alert Notices: Not Supported 00:19:04.519 EGE Aggregate Log Change Notices: Not Supported 00:19:04.519 Normal NVM Subsystem Shutdown event: Not Supported 00:19:04.519 Zone Descriptor Change Notices: Not Supported 00:19:04.519 Discovery Log Change Notices: Not Supported 00:19:04.519 Controller Attributes 00:19:04.519 128-bit Host Identifier: Supported 00:19:04.519 Non-Operational Permissive Mode: Not Supported 00:19:04.519 NVM Sets: Not Supported 00:19:04.519 Read Recovery Levels: Not Supported 00:19:04.519 Endurance Groups: Not Supported 00:19:04.519 Predictable Latency Mode: Not Supported 00:19:04.519 Traffic Based Keep ALive: Not Supported 00:19:04.519 Namespace Granularity: Not Supported 00:19:04.519 SQ Associations: Not Supported 00:19:04.519 UUID List: Not Supported 00:19:04.519 Multi-Domain Subsystem: Not Supported 00:19:04.519 Fixed Capacity Management: Not Supported 00:19:04.519 Variable Capacity Management: Not Supported 00:19:04.519 Delete Endurance Group: Not Supported 00:19:04.519 Delete NVM Set: Not Supported 00:19:04.519 Extended LBA Formats Supported: Not Supported 00:19:04.519 Flexible Data Placement Supported: Not Supported 00:19:04.519 00:19:04.519 Controller Memory Buffer Support 00:19:04.519 ================================ 00:19:04.519 Supported: No 00:19:04.519 00:19:04.519 Persistent Memory Region Support 00:19:04.519 ================================ 00:19:04.519 Supported: No 00:19:04.519 00:19:04.519 Admin Command Set Attributes 00:19:04.519 ============================ 00:19:04.519 Security Send/Receive: Not Supported 00:19:04.519 Format NVM: Not Supported 00:19:04.519 Firmware Activate/Download: Not Supported 00:19:04.519 Namespace Management: Not Supported 00:19:04.519 Device Self-Test: Not Supported 00:19:04.519 Directives: Not Supported 00:19:04.519 NVMe-MI: Not Supported 00:19:04.519 Virtualization Management: Not Supported 00:19:04.519 Doorbell Buffer Config: Not Supported 00:19:04.519 Get LBA Status Capability: Not Supported 00:19:04.519 Command & Feature Lockdown Capability: Not Supported 00:19:04.519 Abort Command Limit: 4 00:19:04.519 Async Event Request Limit: 4 00:19:04.519 Number of Firmware Slots: N/A 00:19:04.519 Firmware Slot 1 Read-Only: N/A 00:19:04.519 Firmware Activation Without Reset: N/A 00:19:04.519 Multiple Update Detection Support: N/A 00:19:04.519 Firmware Update Granularity: No Information Provided 00:19:04.519 Per-Namespace SMART Log: No 00:19:04.519 Asymmetric Namespace Access Log Page: Not Supported 00:19:04.519 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:04.519 Command Effects Log Page: Supported 00:19:04.519 Get Log Page Extended Data: Supported 00:19:04.519 Telemetry Log Pages: Not Supported 00:19:04.519 Persistent Event Log Pages: Not Supported 00:19:04.519 Supported Log Pages Log Page: May Support 00:19:04.520 Commands Supported & Effects Log Page: Not Supported 00:19:04.520 Feature Identifiers & Effects Log Page:May Support 00:19:04.520 NVMe-MI Commands & Effects Log Page: May Support 00:19:04.520 Data Area 4 for Telemetry Log: Not Supported 00:19:04.520 Error Log Page Entries Supported: 128 00:19:04.520 Keep Alive: Supported 00:19:04.520 Keep Alive Granularity: 10000 ms 00:19:04.520 00:19:04.520 NVM Command Set Attributes 00:19:04.520 ========================== 
00:19:04.520 Submission Queue Entry Size 00:19:04.520 Max: 64 00:19:04.520 Min: 64 00:19:04.520 Completion Queue Entry Size 00:19:04.520 Max: 16 00:19:04.520 Min: 16 00:19:04.520 Number of Namespaces: 32 00:19:04.520 Compare Command: Supported 00:19:04.520 Write Uncorrectable Command: Not Supported 00:19:04.520 Dataset Management Command: Supported 00:19:04.520 Write Zeroes Command: Supported 00:19:04.520 Set Features Save Field: Not Supported 00:19:04.520 Reservations: Not Supported 00:19:04.520 Timestamp: Not Supported 00:19:04.520 Copy: Supported 00:19:04.520 Volatile Write Cache: Present 00:19:04.520 Atomic Write Unit (Normal): 1 00:19:04.520 Atomic Write Unit (PFail): 1 00:19:04.520 Atomic Compare & Write Unit: 1 00:19:04.520 Fused Compare & Write: Supported 00:19:04.520 Scatter-Gather List 00:19:04.520 SGL Command Set: Supported (Dword aligned) 00:19:04.520 SGL Keyed: Not Supported 00:19:04.520 SGL Bit Bucket Descriptor: Not Supported 00:19:04.520 SGL Metadata Pointer: Not Supported 00:19:04.520 Oversized SGL: Not Supported 00:19:04.520 SGL Metadata Address: Not Supported 00:19:04.520 SGL Offset: Not Supported 00:19:04.520 Transport SGL Data Block: Not Supported 00:19:04.520 Replay Protected Memory Block: Not Supported 00:19:04.520 00:19:04.520 Firmware Slot Information 00:19:04.520 ========================= 00:19:04.520 Active slot: 1 00:19:04.520 Slot 1 Firmware Revision: 24.05 00:19:04.520 00:19:04.520 00:19:04.520 Commands Supported and Effects 00:19:04.520 ============================== 00:19:04.520 Admin Commands 00:19:04.520 -------------- 00:19:04.520 Get Log Page (02h): Supported 00:19:04.520 Identify (06h): Supported 00:19:04.520 Abort (08h): Supported 00:19:04.520 Set Features (09h): Supported 00:19:04.520 Get Features (0Ah): Supported 00:19:04.520 Asynchronous Event Request (0Ch): Supported 00:19:04.520 Keep Alive (18h): Supported 00:19:04.520 I/O Commands 00:19:04.520 ------------ 00:19:04.520 Flush (00h): Supported LBA-Change 00:19:04.520 Write (01h): Supported LBA-Change 00:19:04.520 Read (02h): Supported 00:19:04.520 Compare (05h): Supported 00:19:04.520 Write Zeroes (08h): Supported LBA-Change 00:19:04.520 Dataset Management (09h): Supported LBA-Change 00:19:04.520 Copy (19h): Supported LBA-Change 00:19:04.520 Unknown (79h): Supported LBA-Change 00:19:04.520 Unknown (7Ah): Supported 00:19:04.520 00:19:04.520 Error Log 00:19:04.520 ========= 00:19:04.520 00:19:04.520 Arbitration 00:19:04.520 =========== 00:19:04.520 Arbitration Burst: 1 00:19:04.520 00:19:04.520 Power Management 00:19:04.520 ================ 00:19:04.520 Number of Power States: 1 00:19:04.520 Current Power State: Power State #0 00:19:04.520 Power State #0: 00:19:04.520 Max Power: 0.00 W 00:19:04.520 Non-Operational State: Operational 00:19:04.520 Entry Latency: Not Reported 00:19:04.520 Exit Latency: Not Reported 00:19:04.520 Relative Read Throughput: 0 00:19:04.520 Relative Read Latency: 0 00:19:04.520 Relative Write Throughput: 0 00:19:04.520 Relative Write Latency: 0 00:19:04.520 Idle Power: Not Reported 00:19:04.520 Active Power: Not Reported 00:19:04.520 Non-Operational Permissive Mode: Not Supported 00:19:04.520 00:19:04.520 Health Information 00:19:04.520 ================== 00:19:04.520 Critical Warnings: 00:19:04.520 Available Spare Space: OK 00:19:04.520 Temperature: OK 00:19:04.520 Device Reliability: OK 00:19:04.520 Read Only: No 00:19:04.520 Volatile Memory Backup: OK 00:19:04.520 Current Temperature: 0 Kelvin (-2[2024-05-16 09:31:57.831513] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:04.520 [2024-05-16 09:31:57.831521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:04.520 [2024-05-16 09:31:57.831546] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:19:04.520 [2024-05-16 09:31:57.831555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.520 [2024-05-16 09:31:57.831561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.520 [2024-05-16 09:31:57.831568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.520 [2024-05-16 09:31:57.831574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.520 [2024-05-16 09:31:57.831651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:04.520 [2024-05-16 09:31:57.831661] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:04.520 [2024-05-16 09:31:57.832649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:04.520 [2024-05-16 09:31:57.832688] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:19:04.520 [2024-05-16 09:31:57.832695] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:19:04.520 [2024-05-16 09:31:57.833657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:04.520 [2024-05-16 09:31:57.833669] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:19:04.520 [2024-05-16 09:31:57.833729] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:04.520 [2024-05-16 09:31:57.837062] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:04.520 73 Celsius) 00:19:04.520 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:04.520 Available Spare: 0% 00:19:04.520 Available Spare Threshold: 0% 00:19:04.520 Life Percentage Used: 0% 00:19:04.520 Data Units Read: 0 00:19:04.521 Data Units Written: 0 00:19:04.521 Host Read Commands: 0 00:19:04.521 Host Write Commands: 0 00:19:04.521 Controller Busy Time: 0 minutes 00:19:04.521 Power Cycles: 0 00:19:04.521 Power On Hours: 0 hours 00:19:04.521 Unsafe Shutdowns: 0 00:19:04.521 Unrecoverable Media Errors: 0 00:19:04.521 Lifetime Error Log Entries: 0 00:19:04.521 Warning Temperature Time: 0 minutes 00:19:04.521 Critical Temperature Time: 0 minutes 00:19:04.521 00:19:04.521 Number of Queues 00:19:04.521 ================ 00:19:04.521 Number of I/O Submission Queues: 127 00:19:04.521 Number of I/O Completion Queues: 127 00:19:04.521 00:19:04.521 Active Namespaces 00:19:04.521 ================= 00:19:04.521 Namespace 
ID:1 00:19:04.521 Error Recovery Timeout: Unlimited 00:19:04.521 Command Set Identifier: NVM (00h) 00:19:04.521 Deallocate: Supported 00:19:04.521 Deallocated/Unwritten Error: Not Supported 00:19:04.521 Deallocated Read Value: Unknown 00:19:04.521 Deallocate in Write Zeroes: Not Supported 00:19:04.521 Deallocated Guard Field: 0xFFFF 00:19:04.521 Flush: Supported 00:19:04.521 Reservation: Supported 00:19:04.521 Namespace Sharing Capabilities: Multiple Controllers 00:19:04.521 Size (in LBAs): 131072 (0GiB) 00:19:04.521 Capacity (in LBAs): 131072 (0GiB) 00:19:04.521 Utilization (in LBAs): 131072 (0GiB) 00:19:04.521 NGUID: 25F1FE840ED64AA083D65651F7A50E73 00:19:04.521 UUID: 25f1fe84-0ed6-4aa0-83d6-5651f7a50e73 00:19:04.521 Thin Provisioning: Not Supported 00:19:04.521 Per-NS Atomic Units: Yes 00:19:04.521 Atomic Boundary Size (Normal): 0 00:19:04.521 Atomic Boundary Size (PFail): 0 00:19:04.521 Atomic Boundary Offset: 0 00:19:04.521 Maximum Single Source Range Length: 65535 00:19:04.521 Maximum Copy Length: 65535 00:19:04.521 Maximum Source Range Count: 1 00:19:04.521 NGUID/EUI64 Never Reused: No 00:19:04.521 Namespace Write Protected: No 00:19:04.521 Number of LBA Formats: 1 00:19:04.521 Current LBA Format: LBA Format #00 00:19:04.521 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:04.521 00:19:04.521 09:31:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:04.521 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.521 [2024-05-16 09:31:58.021676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:09.809 Initializing NVMe Controllers 00:19:09.809 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:09.809 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:09.809 Initialization complete. Launching workers. 00:19:09.809 ======================================================== 00:19:09.809 Latency(us) 00:19:09.809 Device Information : IOPS MiB/s Average min max 00:19:09.809 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40055.61 156.47 3195.21 835.81 6814.03 00:19:09.809 ======================================================== 00:19:09.809 Total : 40055.61 156.47 3195.21 835.81 6814.03 00:19:09.809 00:19:09.809 [2024-05-16 09:32:03.039495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.809 09:32:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:09.809 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.809 [2024-05-16 09:32:03.215375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:15.090 Initializing NVMe Controllers 00:19:15.090 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:15.090 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:15.090 Initialization complete. Launching workers. 
00:19:15.090 ======================================================== 00:19:15.090 Latency(us) 00:19:15.090 Device Information : IOPS MiB/s Average min max 00:19:15.090 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 7625.22 8021.19 00:19:15.090 ======================================================== 00:19:15.090 Total : 16051.20 62.70 7980.74 7625.22 8021.19 00:19:15.090 00:19:15.090 [2024-05-16 09:32:08.251598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:15.090 09:32:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:15.090 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.090 [2024-05-16 09:32:08.430395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:20.379 [2024-05-16 09:32:13.514322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:20.379 Initializing NVMe Controllers 00:19:20.379 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:20.379 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:20.379 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:20.379 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:20.379 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:20.379 Initialization complete. Launching workers. 00:19:20.379 Starting thread on core 2 00:19:20.379 Starting thread on core 3 00:19:20.379 Starting thread on core 1 00:19:20.379 09:32:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:20.379 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.379 [2024-05-16 09:32:13.779462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:23.680 [2024-05-16 09:32:16.848298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:23.680 Initializing NVMe Controllers 00:19:23.680 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.680 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:23.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:23.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:23.680 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:23.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:23.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:23.680 Initialization complete. Launching workers. 
00:19:23.680 Starting thread on core 1 with urgent priority queue 00:19:23.680 Starting thread on core 2 with urgent priority queue 00:19:23.680 Starting thread on core 3 with urgent priority queue 00:19:23.680 Starting thread on core 0 with urgent priority queue 00:19:23.680 SPDK bdev Controller (SPDK1 ) core 0: 13109.67 IO/s 7.63 secs/100000 ios 00:19:23.680 SPDK bdev Controller (SPDK1 ) core 1: 13517.33 IO/s 7.40 secs/100000 ios 00:19:23.680 SPDK bdev Controller (SPDK1 ) core 2: 7780.67 IO/s 12.85 secs/100000 ios 00:19:23.680 SPDK bdev Controller (SPDK1 ) core 3: 11652.00 IO/s 8.58 secs/100000 ios 00:19:23.680 ======================================================== 00:19:23.680 00:19:23.680 09:32:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:23.680 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.680 [2024-05-16 09:32:17.102820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:23.680 Initializing NVMe Controllers 00:19:23.680 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.680 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:23.680 Namespace ID: 1 size: 0GB 00:19:23.680 Initialization complete. 00:19:23.680 INFO: using host memory buffer for IO 00:19:23.680 Hello world! 00:19:23.680 [2024-05-16 09:32:17.137006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:23.680 09:32:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:23.680 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.940 [2024-05-16 09:32:17.397561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:24.883 Initializing NVMe Controllers 00:19:24.883 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:24.883 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:24.883 Initialization complete. Launching workers. 
00:19:24.883 submit (in ns) avg, min, max = 7040.5, 3928.3, 4002241.7 00:19:24.883 complete (in ns) avg, min, max = 17931.2, 2393.3, 4001319.2 00:19:24.883 00:19:24.883 Submit histogram 00:19:24.883 ================ 00:19:24.883 Range in us Cumulative Count 00:19:24.883 3.920 - 3.947: 1.2075% ( 237) 00:19:24.883 3.947 - 3.973: 6.9696% ( 1131) 00:19:24.883 3.973 - 4.000: 17.1184% ( 1992) 00:19:24.883 4.000 - 4.027: 28.7905% ( 2291) 00:19:24.883 4.027 - 4.053: 39.3876% ( 2080) 00:19:24.883 4.053 - 4.080: 49.8472% ( 2053) 00:19:24.883 4.080 - 4.107: 66.8840% ( 3344) 00:19:24.883 4.107 - 4.133: 81.7098% ( 2910) 00:19:24.883 4.133 - 4.160: 91.7261% ( 1966) 00:19:24.883 4.160 - 4.187: 96.7699% ( 990) 00:19:24.883 4.187 - 4.213: 98.5021% ( 340) 00:19:24.883 4.213 - 4.240: 99.1339% ( 124) 00:19:24.883 4.240 - 4.267: 99.3020% ( 33) 00:19:24.883 4.267 - 4.293: 99.3326% ( 6) 00:19:24.883 4.293 - 4.320: 99.3377% ( 1) 00:19:24.883 4.320 - 4.347: 99.3428% ( 1) 00:19:24.883 4.427 - 4.453: 99.3581% ( 3) 00:19:24.883 4.453 - 4.480: 99.3632% ( 1) 00:19:24.883 4.480 - 4.507: 99.3682% ( 1) 00:19:24.883 4.533 - 4.560: 99.3733% ( 1) 00:19:24.883 4.560 - 4.587: 99.3784% ( 1) 00:19:24.883 4.587 - 4.613: 99.3835% ( 1) 00:19:24.883 4.613 - 4.640: 99.3886% ( 1) 00:19:24.883 4.667 - 4.693: 99.4090% ( 4) 00:19:24.883 4.693 - 4.720: 99.4192% ( 2) 00:19:24.883 4.720 - 4.747: 99.4294% ( 2) 00:19:24.884 4.747 - 4.773: 99.4345% ( 1) 00:19:24.884 4.800 - 4.827: 99.4447% ( 2) 00:19:24.884 4.853 - 4.880: 99.4600% ( 3) 00:19:24.884 4.933 - 4.960: 99.4650% ( 1) 00:19:24.884 4.960 - 4.987: 99.4701% ( 1) 00:19:24.884 5.040 - 5.067: 99.4854% ( 3) 00:19:24.884 5.093 - 5.120: 99.4956% ( 2) 00:19:24.884 5.120 - 5.147: 99.5007% ( 1) 00:19:24.884 5.147 - 5.173: 99.5109% ( 2) 00:19:24.884 5.173 - 5.200: 99.5160% ( 1) 00:19:24.884 5.227 - 5.253: 99.5211% ( 1) 00:19:24.884 5.253 - 5.280: 99.5262% ( 1) 00:19:24.884 5.387 - 5.413: 99.5313% ( 1) 00:19:24.884 5.413 - 5.440: 99.5364% ( 1) 00:19:24.884 5.520 - 5.547: 99.5415% ( 1) 00:19:24.884 5.547 - 5.573: 99.5466% ( 1) 00:19:24.884 5.600 - 5.627: 99.5517% ( 1) 00:19:24.884 5.733 - 5.760: 99.5568% ( 1) 00:19:24.884 5.760 - 5.787: 99.5619% ( 1) 00:19:24.884 5.787 - 5.813: 99.5669% ( 1) 00:19:24.884 5.840 - 5.867: 99.5720% ( 1) 00:19:24.884 5.920 - 5.947: 99.5771% ( 1) 00:19:24.884 5.973 - 6.000: 99.5822% ( 1) 00:19:24.884 6.080 - 6.107: 99.5975% ( 3) 00:19:24.884 6.107 - 6.133: 99.6026% ( 1) 00:19:24.884 6.133 - 6.160: 99.6077% ( 1) 00:19:24.884 6.187 - 6.213: 99.6179% ( 2) 00:19:24.884 6.267 - 6.293: 99.6281% ( 2) 00:19:24.884 6.293 - 6.320: 99.6383% ( 2) 00:19:24.884 6.347 - 6.373: 99.6434% ( 1) 00:19:24.884 6.373 - 6.400: 99.6485% ( 1) 00:19:24.884 6.427 - 6.453: 99.6536% ( 1) 00:19:24.884 6.453 - 6.480: 99.6688% ( 3) 00:19:24.884 6.507 - 6.533: 99.6790% ( 2) 00:19:24.884 6.533 - 6.560: 99.6841% ( 1) 00:19:24.884 6.560 - 6.587: 99.6892% ( 1) 00:19:24.884 6.587 - 6.613: 99.6943% ( 1) 00:19:24.884 6.613 - 6.640: 99.6994% ( 1) 00:19:24.884 6.640 - 6.667: 99.7045% ( 1) 00:19:24.884 6.747 - 6.773: 99.7096% ( 1) 00:19:24.884 6.827 - 6.880: 99.7147% ( 1) 00:19:24.884 6.880 - 6.933: 99.7300% ( 3) 00:19:24.884 7.200 - 7.253: 99.7402% ( 2) 00:19:24.884 7.253 - 7.307: 99.7453% ( 1) 00:19:24.884 7.307 - 7.360: 99.7555% ( 2) 00:19:24.884 7.360 - 7.413: 99.7656% ( 2) 00:19:24.884 7.467 - 7.520: 99.7707% ( 1) 00:19:24.884 7.520 - 7.573: 99.7758% ( 1) 00:19:24.884 7.573 - 7.627: 99.7809% ( 1) 00:19:24.884 7.627 - 7.680: 99.7860% ( 1) 00:19:24.884 7.680 - 7.733: 99.7911% ( 1) 00:19:24.884 7.733 - 7.787: 
99.7962% ( 1) 00:19:24.884 7.787 - 7.840: 99.8013% ( 1) 00:19:24.884 7.840 - 7.893: 99.8064% ( 1) 00:19:24.884 7.893 - 7.947: 99.8115% ( 1) 00:19:24.884 7.947 - 8.000: 99.8217% ( 2) 00:19:24.884 8.053 - 8.107: 99.8268% ( 1) 00:19:24.884 8.213 - 8.267: 99.8319% ( 1) 00:19:24.884 8.267 - 8.320: 99.8370% ( 1) 00:19:24.884 8.320 - 8.373: 99.8472% ( 2) 00:19:24.884 8.373 - 8.427: 99.8523% ( 1) 00:19:24.884 8.427 - 8.480: 99.8573% ( 1) 00:19:24.884 8.480 - 8.533: 99.8624% ( 1) 00:19:24.884 8.533 - 8.587: 99.8675% ( 1) 00:19:24.884 8.640 - 8.693: 99.8777% ( 2) 00:19:24.884 8.800 - 8.853: 99.8828% ( 1) 00:19:24.884 8.853 - 8.907: 99.8879% ( 1) 00:19:24.884 8.907 - 8.960: 99.8930% ( 1) 00:19:24.884 9.067 - 9.120: 99.8981% ( 1) 00:19:24.884 9.120 - 9.173: 99.9032% ( 1) 00:19:24.884 9.333 - 9.387: 99.9083% ( 1) 00:19:24.884 9.707 - 9.760: 99.9134% ( 1) 00:19:24.884 11.360 - 11.413: 99.9185% ( 1) 00:19:24.884 15.253 - 15.360: 99.9236% ( 1) 00:19:24.884 2075.307 - 2088.960: 99.9287% ( 1) 00:19:24.884 3986.773 - 4014.080: 100.0000% ( 14) 00:19:24.884 00:19:24.884 Complete histogram 00:19:24.884 ================== 00:19:24.884 Range in us Cumulative Count 00:19:24.884 2.387 - 2.400: 2.7512% ( 540) 00:19:24.884 2.400 - 2.413: 11.9880% ( 1813) 00:19:24.884 2.413 - 2.427: 12.7063% ( 141) 00:19:24.884 2.427 - 2.440: 14.7341% ( 398) 00:19:24.884 2.440 - 2.453: 49.6485% ( 6853) 00:19:24.884 2.453 - 2.467: 62.6452% ( 2551) 00:19:24.884 2.467 - 2.480: 72.8704% ( 2007) 00:19:24.884 2.480 - 2.493: 81.2615% ( 1647) 00:19:24.884 2.493 - 2.507: 83.9617% ( 530) 00:19:24.884 2.507 - 2.520: 86.4530% ( 489) 00:19:24.884 2.520 - 2.533: 91.4204% ( 975) 00:19:24.884 2.533 - 2.547: 95.4351% ( 788) 00:19:24.884 2.547 - 2.560: 97.3813% ( 382) 00:19:24.884 2.560 - 2.573: 98.6091% ( 241) 00:19:24.884 2.573 - 2.587: 99.0269% ( 82) 00:19:24.884 2.587 - 2.600: 99.1746% ( 29) 00:19:24.884 2.600 - 2.613: 99.2001% ( 5) 00:19:24.884 2.613 - 2.627: 99.2103% ( 2) 00:19:24.884 2.627 - 2.640: 99.2154% ( 1) 00:19:24.884 2.667 - 2.680: 99.2205% ( 1) 00:19:24.884 2.680 - 2.693: 99.2256% ( 1) 00:19:24.884 2.693 - 2.707: 99.2358% ( 2) 00:19:24.884 2.747 - 2.760: 99.2409% ( 1) 00:19:24.884 2.787 - 2.800: 99.2511% ( 2) 00:19:24.884 2.800 - 2.813: 99.2613% ( 2) 00:19:24.884 2.827 - 2.840: 99.2664% ( 1) 00:19:24.884 2.840 - 2.853: 99.2714% ( 1) 00:19:24.884 2.853 - 2.867: 99.2765% ( 1) 00:19:24.884 2.867 - 2.880: 99.2816% ( 1) 00:19:24.884 2.933 - 2.947: 99.2867% ( 1) 00:19:24.884 2.973 - 2.987: 99.2918% ( 1) 00:19:24.884 2.987 - 3.000: 99.3020% ( 2) 00:19:24.884 3.000 - 3.013: 99.3122% ( 2) 00:19:24.884 3.027 - 3.040: 99.3173% ( 1) 00:19:24.884 3.053 - 3.067: 99.3275% ( 2) 00:19:24.884 3.080 - 3.093: 99.3326% ( 1) 00:19:24.884 3.120 - 3.133: 99.3377% ( 1) 00:19:24.884 3.160 - 3.173: 99.3428% ( 1) 00:19:24.884 3.200 - 3.213: 99.3479% ( 1) 00:19:24.884 3.213 - 3.227: 99.3530% ( 1) 00:19:24.884 3.253 - 3.267: 99.3581% ( 1) 00:19:24.884 4.507 - 4.533: 99.3632% ( 1) 00:19:24.884 4.773 - 4.800: 99.3682% ( 1) 00:19:24.884 4.800 - 4.827: 99.3784% ( 2) 00:19:24.884 4.827 - 4.853: 99.3835% ( 1) 00:19:24.884 4.907 - 4.933: 99.3886% ( 1) 00:19:24.884 5.013 - 5.040: 99.3937% ( 1) 00:19:24.884 5.173 - 5.200: 99.3988% ( 1) 00:19:24.884 5.200 - 5.227: 99.4039% ( 1) 00:19:24.884 5.520 - 5.547: 99.4141% ( 2) 00:19:24.884 5.547 - 5.573: 99.4192% ( 1) 00:19:24.884 5.573 - 5.600: 99.4243% ( 1) 00:19:24.884 5.627 - 5.653: 99.4447% ( 4) 00:19:24.884 5.973 - 6.000: 99.4600% ( 3) 00:19:24.884 6.000 - 6.027: 99.4650% ( 1) 00:19:24.884 6.053 - 6.080: 99.4701% ( 1) 
00:19:24.884 6.133 - 6.160: 99.4752% ( 1) 00:19:24.884 6.160 - 6.187: 99.4803% ( 1) 00:19:24.884 6.213 - 6.240: 99.4854% ( 1) 00:19:24.884 6.267 - 6.293: 99.4905% ( 1) 00:19:24.884 6.320 - 6.347: 99.4956% ( 1) 00:19:24.884 6.347 - 6.373: 99.5007% ( 1) 00:19:24.884 6.373 - 6.4[2024-05-16 09:32:18.420048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:25.145 00: 99.5058% ( 1) 00:19:25.145 6.453 - 6.480: 99.5109% ( 1) 00:19:25.145 6.480 - 6.507: 99.5211% ( 2) 00:19:25.145 6.507 - 6.533: 99.5262% ( 1) 00:19:25.145 6.587 - 6.613: 99.5313% ( 1) 00:19:25.145 6.613 - 6.640: 99.5364% ( 1) 00:19:25.145 6.693 - 6.720: 99.5415% ( 1) 00:19:25.145 6.720 - 6.747: 99.5466% ( 1) 00:19:25.145 6.773 - 6.800: 99.5517% ( 1) 00:19:25.145 6.800 - 6.827: 99.5568% ( 1) 00:19:25.145 6.827 - 6.880: 99.5619% ( 1) 00:19:25.145 7.040 - 7.093: 99.5822% ( 4) 00:19:25.145 7.147 - 7.200: 99.5873% ( 1) 00:19:25.145 7.307 - 7.360: 99.5924% ( 1) 00:19:25.145 8.160 - 8.213: 99.5975% ( 1) 00:19:25.145 8.320 - 8.373: 99.6026% ( 1) 00:19:25.145 11.093 - 11.147: 99.6077% ( 1) 00:19:25.145 14.827 - 14.933: 99.6128% ( 1) 00:19:25.145 3741.013 - 3768.320: 99.6179% ( 1) 00:19:25.145 3986.773 - 4014.080: 100.0000% ( 75) 00:19:25.145 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:25.145 [ 00:19:25.145 { 00:19:25.145 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:25.145 "subtype": "Discovery", 00:19:25.145 "listen_addresses": [], 00:19:25.145 "allow_any_host": true, 00:19:25.145 "hosts": [] 00:19:25.145 }, 00:19:25.145 { 00:19:25.145 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:25.145 "subtype": "NVMe", 00:19:25.145 "listen_addresses": [ 00:19:25.145 { 00:19:25.145 "trtype": "VFIOUSER", 00:19:25.145 "adrfam": "IPv4", 00:19:25.145 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:25.145 "trsvcid": "0" 00:19:25.145 } 00:19:25.145 ], 00:19:25.145 "allow_any_host": true, 00:19:25.145 "hosts": [], 00:19:25.145 "serial_number": "SPDK1", 00:19:25.145 "model_number": "SPDK bdev Controller", 00:19:25.145 "max_namespaces": 32, 00:19:25.145 "min_cntlid": 1, 00:19:25.145 "max_cntlid": 65519, 00:19:25.145 "namespaces": [ 00:19:25.145 { 00:19:25.145 "nsid": 1, 00:19:25.145 "bdev_name": "Malloc1", 00:19:25.145 "name": "Malloc1", 00:19:25.145 "nguid": "25F1FE840ED64AA083D65651F7A50E73", 00:19:25.145 "uuid": "25f1fe84-0ed6-4aa0-83d6-5651f7a50e73" 00:19:25.145 } 00:19:25.145 ] 00:19:25.145 }, 00:19:25.145 { 00:19:25.145 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:25.145 "subtype": "NVMe", 00:19:25.145 "listen_addresses": [ 00:19:25.145 { 00:19:25.145 "trtype": "VFIOUSER", 00:19:25.145 "adrfam": "IPv4", 00:19:25.145 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:25.145 "trsvcid": "0" 00:19:25.145 } 00:19:25.145 ], 00:19:25.145 "allow_any_host": true, 00:19:25.145 "hosts": [], 00:19:25.145 "serial_number": "SPDK2", 00:19:25.145 
"model_number": "SPDK bdev Controller", 00:19:25.145 "max_namespaces": 32, 00:19:25.145 "min_cntlid": 1, 00:19:25.145 "max_cntlid": 65519, 00:19:25.145 "namespaces": [ 00:19:25.145 { 00:19:25.145 "nsid": 1, 00:19:25.145 "bdev_name": "Malloc2", 00:19:25.145 "name": "Malloc2", 00:19:25.145 "nguid": "072FF3AEF7A84763B93F09413AB68E30", 00:19:25.145 "uuid": "072ff3ae-f7a8-4763-b93f-09413ab68e30" 00:19:25.145 } 00:19:25.145 ] 00:19:25.145 } 00:19:25.145 ] 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=258842 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:19:25.145 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:19:25.145 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:19:25.406 [2024-05-16 09:32:18.807927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:25.406 09:32:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:25.666 Malloc3 00:19:25.666 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:25.666 [2024-05-16 09:32:19.185459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:25.666 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:25.929 Asynchronous Event Request test 00:19:25.929 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:25.929 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:25.929 Registering asynchronous event callbacks... 00:19:25.929 Starting namespace attribute notice tests for all controllers... 00:19:25.929 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:25.929 aer_cb - Changed Namespace 00:19:25.929 Cleaning up... 00:19:25.929 [ 00:19:25.929 { 00:19:25.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:25.929 "subtype": "Discovery", 00:19:25.929 "listen_addresses": [], 00:19:25.929 "allow_any_host": true, 00:19:25.929 "hosts": [] 00:19:25.929 }, 00:19:25.929 { 00:19:25.929 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:25.929 "subtype": "NVMe", 00:19:25.929 "listen_addresses": [ 00:19:25.929 { 00:19:25.929 "trtype": "VFIOUSER", 00:19:25.929 "adrfam": "IPv4", 00:19:25.929 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:25.929 "trsvcid": "0" 00:19:25.929 } 00:19:25.929 ], 00:19:25.929 "allow_any_host": true, 00:19:25.929 "hosts": [], 00:19:25.929 "serial_number": "SPDK1", 00:19:25.929 "model_number": "SPDK bdev Controller", 00:19:25.929 "max_namespaces": 32, 00:19:25.929 "min_cntlid": 1, 00:19:25.929 "max_cntlid": 65519, 00:19:25.929 "namespaces": [ 00:19:25.929 { 00:19:25.929 "nsid": 1, 00:19:25.929 "bdev_name": "Malloc1", 00:19:25.929 "name": "Malloc1", 00:19:25.929 "nguid": "25F1FE840ED64AA083D65651F7A50E73", 00:19:25.929 "uuid": "25f1fe84-0ed6-4aa0-83d6-5651f7a50e73" 00:19:25.929 }, 00:19:25.929 { 00:19:25.929 "nsid": 2, 00:19:25.929 "bdev_name": "Malloc3", 00:19:25.929 "name": "Malloc3", 00:19:25.929 "nguid": "D8021D5576CF438E95EFD8D08E3DA23B", 00:19:25.929 "uuid": "d8021d55-76cf-438e-95ef-d8d08e3da23b" 00:19:25.929 } 00:19:25.929 ] 00:19:25.929 }, 00:19:25.929 { 00:19:25.929 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:25.929 "subtype": "NVMe", 00:19:25.929 "listen_addresses": [ 00:19:25.929 { 00:19:25.929 "trtype": "VFIOUSER", 00:19:25.929 "adrfam": "IPv4", 00:19:25.929 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:25.929 "trsvcid": "0" 00:19:25.929 } 00:19:25.929 ], 00:19:25.929 "allow_any_host": true, 00:19:25.929 "hosts": [], 00:19:25.929 "serial_number": "SPDK2", 00:19:25.929 "model_number": "SPDK bdev Controller", 00:19:25.929 "max_namespaces": 32, 00:19:25.929 "min_cntlid": 1, 00:19:25.929 "max_cntlid": 65519, 00:19:25.929 "namespaces": [ 00:19:25.929 { 00:19:25.929 "nsid": 1, 00:19:25.929 "bdev_name": "Malloc2", 00:19:25.929 "name": 
"Malloc2", 00:19:25.929 "nguid": "072FF3AEF7A84763B93F09413AB68E30", 00:19:25.929 "uuid": "072ff3ae-f7a8-4763-b93f-09413ab68e30" 00:19:25.929 } 00:19:25.929 ] 00:19:25.929 } 00:19:25.929 ] 00:19:25.929 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 258842 00:19:25.929 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:25.929 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:25.929 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:25.929 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:25.929 [2024-05-16 09:32:19.389899] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:25.929 [2024-05-16 09:32:19.389937] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258937 ] 00:19:25.929 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.929 [2024-05-16 09:32:19.419582] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:25.929 [2024-05-16 09:32:19.432794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:25.929 [2024-05-16 09:32:19.432814] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f82769b5000 00:19:25.929 [2024-05-16 09:32:19.433789] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.434794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.435799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.436810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.437824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.438824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.439829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.440828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:25.929 [2024-05-16 09:32:19.441836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:25.929 [2024-05-16 09:32:19.441848] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 
0xb000, Offset 0x1000, Map addr 0x7f82769aa000 00:19:25.929 [2024-05-16 09:32:19.443178] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:25.929 [2024-05-16 09:32:19.460383] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:25.929 [2024-05-16 09:32:19.460406] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:25.929 [2024-05-16 09:32:19.462472] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:25.929 [2024-05-16 09:32:19.462516] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:25.929 [2024-05-16 09:32:19.462595] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:25.929 [2024-05-16 09:32:19.462612] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:25.929 [2024-05-16 09:32:19.462618] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:19:25.929 [2024-05-16 09:32:19.463472] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:25.930 [2024-05-16 09:32:19.463481] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:25.930 [2024-05-16 09:32:19.463489] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:25.930 [2024-05-16 09:32:19.464478] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:25.930 [2024-05-16 09:32:19.464488] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:25.930 [2024-05-16 09:32:19.464496] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:25.930 [2024-05-16 09:32:19.465481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:25.930 [2024-05-16 09:32:19.465489] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:25.930 [2024-05-16 09:32:19.468057] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:25.930 [2024-05-16 09:32:19.468065] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:25.930 [2024-05-16 09:32:19.468070] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:25.930 [2024-05-16 09:32:19.468077] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:25.930 [2024-05-16 09:32:19.468183] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:25.930 [2024-05-16 09:32:19.468187] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:25.930 [2024-05-16 09:32:19.468192] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:25.930 [2024-05-16 09:32:19.468507] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:25.930 [2024-05-16 09:32:19.469511] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:25.930 [2024-05-16 09:32:19.470522] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:25.930 [2024-05-16 09:32:19.471529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:25.930 [2024-05-16 09:32:19.471570] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:25.930 [2024-05-16 09:32:19.472543] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:25.930 [2024-05-16 09:32:19.472552] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:25.930 [2024-05-16 09:32:19.472557] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:25.930 [2024-05-16 09:32:19.472578] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:25.930 [2024-05-16 09:32:19.472589] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:25.930 [2024-05-16 09:32:19.472603] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:25.930 [2024-05-16 09:32:19.472607] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:25.930 [2024-05-16 09:32:19.472619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:25.930 [2024-05-16 09:32:19.479060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:25.930 [2024-05-16 09:32:19.479071] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:25.930 [2024-05-16 09:32:19.479076] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:25.930 [2024-05-16 09:32:19.479080] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:25.930 [2024-05-16 
09:32:19.479084] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:25.930 [2024-05-16 09:32:19.479091] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:25.930 [2024-05-16 09:32:19.479096] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:25.930 [2024-05-16 09:32:19.479101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:25.930 [2024-05-16 09:32:19.479108] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:25.930 [2024-05-16 09:32:19.479118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:25.930 [2024-05-16 09:32:19.487058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:25.930 [2024-05-16 09:32:19.487071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.930 [2024-05-16 09:32:19.487079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.930 [2024-05-16 09:32:19.487087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.930 [2024-05-16 09:32:19.487096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.930 [2024-05-16 09:32:19.487100] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:25.930 [2024-05-16 09:32:19.487111] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:25.930 [2024-05-16 09:32:19.487121] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.495058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.495066] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:26.194 [2024-05-16 09:32:19.495071] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.495078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.495084] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.495093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES 
cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.503058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.503110] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.503119] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.503126] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:26.194 [2024-05-16 09:32:19.503131] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:26.194 [2024-05-16 09:32:19.503137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.511059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.511069] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:26.194 [2024-05-16 09:32:19.511078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.511085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.511092] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:26.194 [2024-05-16 09:32:19.511096] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:26.194 [2024-05-16 09:32:19.511103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.519058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.519072] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.519080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.519087] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:26.194 [2024-05-16 09:32:19.519095] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:26.194 [2024-05-16 09:32:19.519102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.527058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.527067] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting 
state to identify ns iocs specific (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.527074] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.527082] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.527088] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.527093] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.527098] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:26.194 [2024-05-16 09:32:19.527102] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:26.194 [2024-05-16 09:32:19.527107] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:26.194 [2024-05-16 09:32:19.527125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.535060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.535074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.543057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.543077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.551058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.551071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.559057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.559070] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:26.194 [2024-05-16 09:32:19.559075] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:26.194 [2024-05-16 09:32:19.559079] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:26.194 [2024-05-16 09:32:19.559082] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:26.194 [2024-05-16 09:32:19.559089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:26.194 [2024-05-16 09:32:19.559096] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:19:26.194 [2024-05-16 09:32:19.559100] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:26.194 [2024-05-16 09:32:19.559106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.559116] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:26.194 [2024-05-16 09:32:19.559120] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:26.194 [2024-05-16 09:32:19.559126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.559133] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:26.194 [2024-05-16 09:32:19.559138] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:26.194 [2024-05-16 09:32:19.559143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:26.194 [2024-05-16 09:32:19.567057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.567071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.567080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:26.194 [2024-05-16 09:32:19.567089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:26.194 ===================================================== 00:19:26.194 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:26.194 ===================================================== 00:19:26.194 Controller Capabilities/Features 00:19:26.194 ================================ 00:19:26.194 Vendor ID: 4e58 00:19:26.194 Subsystem Vendor ID: 4e58 00:19:26.194 Serial Number: SPDK2 00:19:26.195 Model Number: SPDK bdev Controller 00:19:26.195 Firmware Version: 24.05 00:19:26.195 Recommended Arb Burst: 6 00:19:26.195 IEEE OUI Identifier: 8d 6b 50 00:19:26.195 Multi-path I/O 00:19:26.195 May have multiple subsystem ports: Yes 00:19:26.195 May have multiple controllers: Yes 00:19:26.195 Associated with SR-IOV VF: No 00:19:26.195 Max Data Transfer Size: 131072 00:19:26.195 Max Number of Namespaces: 32 00:19:26.195 Max Number of I/O Queues: 127 00:19:26.195 NVMe Specification Version (VS): 1.3 00:19:26.195 NVMe Specification Version (Identify): 1.3 00:19:26.195 Maximum Queue Entries: 256 00:19:26.195 Contiguous Queues Required: Yes 00:19:26.195 Arbitration Mechanisms Supported 00:19:26.195 Weighted Round Robin: Not Supported 00:19:26.195 Vendor Specific: Not Supported 00:19:26.195 Reset Timeout: 15000 ms 00:19:26.195 Doorbell Stride: 4 bytes 00:19:26.195 NVM Subsystem Reset: Not Supported 00:19:26.195 Command Sets Supported 00:19:26.195 NVM Command Set: Supported 00:19:26.195 Boot Partition: Not Supported 00:19:26.195 Memory Page Size Minimum: 4096 bytes 00:19:26.195 Memory Page Size Maximum: 4096 bytes 00:19:26.195 
Persistent Memory Region: Not Supported 00:19:26.195 Optional Asynchronous Events Supported 00:19:26.195 Namespace Attribute Notices: Supported 00:19:26.195 Firmware Activation Notices: Not Supported 00:19:26.195 ANA Change Notices: Not Supported 00:19:26.195 PLE Aggregate Log Change Notices: Not Supported 00:19:26.195 LBA Status Info Alert Notices: Not Supported 00:19:26.195 EGE Aggregate Log Change Notices: Not Supported 00:19:26.195 Normal NVM Subsystem Shutdown event: Not Supported 00:19:26.195 Zone Descriptor Change Notices: Not Supported 00:19:26.195 Discovery Log Change Notices: Not Supported 00:19:26.195 Controller Attributes 00:19:26.195 128-bit Host Identifier: Supported 00:19:26.195 Non-Operational Permissive Mode: Not Supported 00:19:26.195 NVM Sets: Not Supported 00:19:26.195 Read Recovery Levels: Not Supported 00:19:26.195 Endurance Groups: Not Supported 00:19:26.195 Predictable Latency Mode: Not Supported 00:19:26.195 Traffic Based Keep ALive: Not Supported 00:19:26.195 Namespace Granularity: Not Supported 00:19:26.195 SQ Associations: Not Supported 00:19:26.195 UUID List: Not Supported 00:19:26.195 Multi-Domain Subsystem: Not Supported 00:19:26.195 Fixed Capacity Management: Not Supported 00:19:26.195 Variable Capacity Management: Not Supported 00:19:26.195 Delete Endurance Group: Not Supported 00:19:26.195 Delete NVM Set: Not Supported 00:19:26.195 Extended LBA Formats Supported: Not Supported 00:19:26.195 Flexible Data Placement Supported: Not Supported 00:19:26.195 00:19:26.195 Controller Memory Buffer Support 00:19:26.195 ================================ 00:19:26.195 Supported: No 00:19:26.195 00:19:26.195 Persistent Memory Region Support 00:19:26.195 ================================ 00:19:26.195 Supported: No 00:19:26.195 00:19:26.195 Admin Command Set Attributes 00:19:26.195 ============================ 00:19:26.195 Security Send/Receive: Not Supported 00:19:26.195 Format NVM: Not Supported 00:19:26.195 Firmware Activate/Download: Not Supported 00:19:26.195 Namespace Management: Not Supported 00:19:26.195 Device Self-Test: Not Supported 00:19:26.195 Directives: Not Supported 00:19:26.195 NVMe-MI: Not Supported 00:19:26.195 Virtualization Management: Not Supported 00:19:26.195 Doorbell Buffer Config: Not Supported 00:19:26.195 Get LBA Status Capability: Not Supported 00:19:26.195 Command & Feature Lockdown Capability: Not Supported 00:19:26.195 Abort Command Limit: 4 00:19:26.195 Async Event Request Limit: 4 00:19:26.195 Number of Firmware Slots: N/A 00:19:26.195 Firmware Slot 1 Read-Only: N/A 00:19:26.195 Firmware Activation Without Reset: N/A 00:19:26.195 Multiple Update Detection Support: N/A 00:19:26.195 Firmware Update Granularity: No Information Provided 00:19:26.195 Per-Namespace SMART Log: No 00:19:26.195 Asymmetric Namespace Access Log Page: Not Supported 00:19:26.195 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:26.195 Command Effects Log Page: Supported 00:19:26.195 Get Log Page Extended Data: Supported 00:19:26.195 Telemetry Log Pages: Not Supported 00:19:26.195 Persistent Event Log Pages: Not Supported 00:19:26.195 Supported Log Pages Log Page: May Support 00:19:26.195 Commands Supported & Effects Log Page: Not Supported 00:19:26.195 Feature Identifiers & Effects Log Page:May Support 00:19:26.195 NVMe-MI Commands & Effects Log Page: May Support 00:19:26.195 Data Area 4 for Telemetry Log: Not Supported 00:19:26.195 Error Log Page Entries Supported: 128 00:19:26.195 Keep Alive: Supported 00:19:26.195 Keep Alive Granularity: 10000 ms 00:19:26.195 
00:19:26.195 NVM Command Set Attributes 00:19:26.195 ========================== 00:19:26.195 Submission Queue Entry Size 00:19:26.195 Max: 64 00:19:26.195 Min: 64 00:19:26.195 Completion Queue Entry Size 00:19:26.195 Max: 16 00:19:26.195 Min: 16 00:19:26.195 Number of Namespaces: 32 00:19:26.195 Compare Command: Supported 00:19:26.195 Write Uncorrectable Command: Not Supported 00:19:26.195 Dataset Management Command: Supported 00:19:26.195 Write Zeroes Command: Supported 00:19:26.195 Set Features Save Field: Not Supported 00:19:26.195 Reservations: Not Supported 00:19:26.195 Timestamp: Not Supported 00:19:26.195 Copy: Supported 00:19:26.195 Volatile Write Cache: Present 00:19:26.195 Atomic Write Unit (Normal): 1 00:19:26.195 Atomic Write Unit (PFail): 1 00:19:26.195 Atomic Compare & Write Unit: 1 00:19:26.195 Fused Compare & Write: Supported 00:19:26.195 Scatter-Gather List 00:19:26.195 SGL Command Set: Supported (Dword aligned) 00:19:26.195 SGL Keyed: Not Supported 00:19:26.195 SGL Bit Bucket Descriptor: Not Supported 00:19:26.195 SGL Metadata Pointer: Not Supported 00:19:26.195 Oversized SGL: Not Supported 00:19:26.195 SGL Metadata Address: Not Supported 00:19:26.195 SGL Offset: Not Supported 00:19:26.195 Transport SGL Data Block: Not Supported 00:19:26.195 Replay Protected Memory Block: Not Supported 00:19:26.195 00:19:26.195 Firmware Slot Information 00:19:26.195 ========================= 00:19:26.195 Active slot: 1 00:19:26.195 Slot 1 Firmware Revision: 24.05 00:19:26.195 00:19:26.195 00:19:26.195 Commands Supported and Effects 00:19:26.195 ============================== 00:19:26.195 Admin Commands 00:19:26.195 -------------- 00:19:26.195 Get Log Page (02h): Supported 00:19:26.195 Identify (06h): Supported 00:19:26.195 Abort (08h): Supported 00:19:26.195 Set Features (09h): Supported 00:19:26.195 Get Features (0Ah): Supported 00:19:26.195 Asynchronous Event Request (0Ch): Supported 00:19:26.195 Keep Alive (18h): Supported 00:19:26.195 I/O Commands 00:19:26.195 ------------ 00:19:26.195 Flush (00h): Supported LBA-Change 00:19:26.195 Write (01h): Supported LBA-Change 00:19:26.195 Read (02h): Supported 00:19:26.195 Compare (05h): Supported 00:19:26.195 Write Zeroes (08h): Supported LBA-Change 00:19:26.195 Dataset Management (09h): Supported LBA-Change 00:19:26.195 Copy (19h): Supported LBA-Change 00:19:26.195 Unknown (79h): Supported LBA-Change 00:19:26.195 Unknown (7Ah): Supported 00:19:26.195 00:19:26.195 Error Log 00:19:26.195 ========= 00:19:26.195 00:19:26.195 Arbitration 00:19:26.195 =========== 00:19:26.195 Arbitration Burst: 1 00:19:26.195 00:19:26.195 Power Management 00:19:26.195 ================ 00:19:26.195 Number of Power States: 1 00:19:26.195 Current Power State: Power State #0 00:19:26.195 Power State #0: 00:19:26.195 Max Power: 0.00 W 00:19:26.195 Non-Operational State: Operational 00:19:26.195 Entry Latency: Not Reported 00:19:26.195 Exit Latency: Not Reported 00:19:26.195 Relative Read Throughput: 0 00:19:26.195 Relative Read Latency: 0 00:19:26.195 Relative Write Throughput: 0 00:19:26.195 Relative Write Latency: 0 00:19:26.195 Idle Power: Not Reported 00:19:26.195 Active Power: Not Reported 00:19:26.195 Non-Operational Permissive Mode: Not Supported 00:19:26.195 00:19:26.195 Health Information 00:19:26.195 ================== 00:19:26.195 Critical Warnings: 00:19:26.195 Available Spare Space: OK 00:19:26.195 Temperature: OK 00:19:26.195 Device Reliability: OK 00:19:26.195 Read Only: No 00:19:26.195 Volatile Memory Backup: OK 00:19:26.195 Current Temperature: 0 Kelvin 
(-2[2024-05-16 09:32:19.567189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:26.195 [2024-05-16 09:32:19.575057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:26.195 [2024-05-16 09:32:19.575085] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:26.195 [2024-05-16 09:32:19.575093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.196 [2024-05-16 09:32:19.575100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.196 [2024-05-16 09:32:19.575106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.196 [2024-05-16 09:32:19.575112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.196 [2024-05-16 09:32:19.575152] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:26.196 [2024-05-16 09:32:19.575162] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:26.196 [2024-05-16 09:32:19.576154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:26.196 [2024-05-16 09:32:19.576203] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:26.196 [2024-05-16 09:32:19.576209] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:26.196 [2024-05-16 09:32:19.577158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:26.196 [2024-05-16 09:32:19.577170] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:26.196 [2024-05-16 09:32:19.577217] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:26.196 [2024-05-16 09:32:19.580059] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:26.196 73 Celsius) 00:19:26.196 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:26.196 Available Spare: 0% 00:19:26.196 Available Spare Threshold: 0% 00:19:26.196 Life Percentage Used: 0% 00:19:26.196 Data Units Read: 0 00:19:26.196 Data Units Written: 0 00:19:26.196 Host Read Commands: 0 00:19:26.196 Host Write Commands: 0 00:19:26.196 Controller Busy Time: 0 minutes 00:19:26.196 Power Cycles: 0 00:19:26.196 Power On Hours: 0 hours 00:19:26.196 Unsafe Shutdowns: 0 00:19:26.196 Unrecoverable Media Errors: 0 00:19:26.196 Lifetime Error Log Entries: 0 00:19:26.196 Warning Temperature Time: 0 minutes 00:19:26.196 Critical Temperature Time: 0 minutes 00:19:26.196 00:19:26.196 Number of Queues 00:19:26.196 ================ 00:19:26.196 Number of I/O Submission Queues: 127 00:19:26.196 Number of I/O Completion Queues: 127 00:19:26.196 00:19:26.196 Active Namespaces 
00:19:26.196 ================= 00:19:26.196 Namespace ID:1 00:19:26.196 Error Recovery Timeout: Unlimited 00:19:26.196 Command Set Identifier: NVM (00h) 00:19:26.196 Deallocate: Supported 00:19:26.196 Deallocated/Unwritten Error: Not Supported 00:19:26.196 Deallocated Read Value: Unknown 00:19:26.196 Deallocate in Write Zeroes: Not Supported 00:19:26.196 Deallocated Guard Field: 0xFFFF 00:19:26.196 Flush: Supported 00:19:26.196 Reservation: Supported 00:19:26.196 Namespace Sharing Capabilities: Multiple Controllers 00:19:26.196 Size (in LBAs): 131072 (0GiB) 00:19:26.196 Capacity (in LBAs): 131072 (0GiB) 00:19:26.196 Utilization (in LBAs): 131072 (0GiB) 00:19:26.196 NGUID: 072FF3AEF7A84763B93F09413AB68E30 00:19:26.196 UUID: 072ff3ae-f7a8-4763-b93f-09413ab68e30 00:19:26.196 Thin Provisioning: Not Supported 00:19:26.196 Per-NS Atomic Units: Yes 00:19:26.196 Atomic Boundary Size (Normal): 0 00:19:26.196 Atomic Boundary Size (PFail): 0 00:19:26.196 Atomic Boundary Offset: 0 00:19:26.196 Maximum Single Source Range Length: 65535 00:19:26.196 Maximum Copy Length: 65535 00:19:26.196 Maximum Source Range Count: 1 00:19:26.196 NGUID/EUI64 Never Reused: No 00:19:26.196 Namespace Write Protected: No 00:19:26.196 Number of LBA Formats: 1 00:19:26.196 Current LBA Format: LBA Format #00 00:19:26.196 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:26.196 00:19:26.196 09:32:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:26.196 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.458 [2024-05-16 09:32:19.764066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:31.747 Initializing NVMe Controllers 00:19:31.747 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:31.747 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:31.747 Initialization complete. Launching workers. 
00:19:31.747 ======================================================== 00:19:31.747 Latency(us) 00:19:31.747 Device Information : IOPS MiB/s Average min max 00:19:31.747 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40036.80 156.39 3199.44 832.56 6844.28 00:19:31.747 ======================================================== 00:19:31.747 Total : 40036.80 156.39 3199.44 832.56 6844.28 00:19:31.747 00:19:31.747 [2024-05-16 09:32:24.871237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:31.747 09:32:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:31.747 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.747 [2024-05-16 09:32:25.042809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:37.037 Initializing NVMe Controllers 00:19:37.037 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:37.037 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:37.037 Initialization complete. Launching workers. 00:19:37.037 ======================================================== 00:19:37.037 Latency(us) 00:19:37.037 Device Information : IOPS MiB/s Average min max 00:19:37.037 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35902.26 140.24 3564.57 1105.04 7380.88 00:19:37.037 ======================================================== 00:19:37.037 Total : 35902.26 140.24 3564.57 1105.04 7380.88 00:19:37.037 00:19:37.037 [2024-05-16 09:32:30.060519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:37.037 09:32:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:37.037 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.037 [2024-05-16 09:32:30.251670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:42.332 [2024-05-16 09:32:35.390137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:42.332 Initializing NVMe Controllers 00:19:42.332 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:42.332 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:42.332 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:42.332 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:42.332 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:42.332 Initialization complete. Launching workers. 
00:19:42.332 Starting thread on core 2 00:19:42.332 Starting thread on core 3 00:19:42.332 Starting thread on core 1 00:19:42.332 09:32:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:42.332 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.332 [2024-05-16 09:32:35.650589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:45.642 [2024-05-16 09:32:38.732795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:45.642 Initializing NVMe Controllers 00:19:45.642 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:45.642 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:45.642 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:45.642 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:45.642 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:45.642 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:45.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:45.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:45.642 Initialization complete. Launching workers. 00:19:45.642 Starting thread on core 1 with urgent priority queue 00:19:45.642 Starting thread on core 2 with urgent priority queue 00:19:45.642 Starting thread on core 3 with urgent priority queue 00:19:45.642 Starting thread on core 0 with urgent priority queue 00:19:45.642 SPDK bdev Controller (SPDK2 ) core 0: 15825.33 IO/s 6.32 secs/100000 ios 00:19:45.642 SPDK bdev Controller (SPDK2 ) core 1: 8046.67 IO/s 12.43 secs/100000 ios 00:19:45.642 SPDK bdev Controller (SPDK2 ) core 2: 16242.67 IO/s 6.16 secs/100000 ios 00:19:45.642 SPDK bdev Controller (SPDK2 ) core 3: 8233.67 IO/s 12.15 secs/100000 ios 00:19:45.642 ======================================================== 00:19:45.642 00:19:45.642 09:32:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:45.642 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.642 [2024-05-16 09:32:38.992509] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:45.642 Initializing NVMe Controllers 00:19:45.642 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:45.642 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:45.642 Namespace ID: 1 size: 0GB 00:19:45.642 Initialization complete. 00:19:45.642 INFO: using host memory buffer for IO 00:19:45.642 Hello world! 
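The perf, reconnect, arbitration and hello_world runs above all reach the same vfio-user controller through an SPDK transport ID string instead of a PCI address. The following is a condensed sketch of that invocation pattern, with the socket directory, subsystem NQN and flags taken from the traced commands; the SPDK_DIR and TRID variable names are introduced here only for illustration and are not part of the test script.

#!/usr/bin/env bash
# Sketch only: mirrors the invocations traced above, not the test script itself.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk            # adjust to your checkout
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# 4 KiB reads for 5 s at queue depth 128 on core 1 (mask 0x2), as in the @84 run above
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# The example programs accept the same -r string, e.g. the @88 hello_world run above
"$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"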
00:19:45.642 [2024-05-16 09:32:39.004593] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:45.642 09:32:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:45.642 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.903 [2024-05-16 09:32:39.262341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:46.847 Initializing NVMe Controllers 00:19:46.847 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:46.847 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:46.847 Initialization complete. Launching workers. 00:19:46.847 submit (in ns) avg, min, max = 8853.7, 3936.7, 4000651.7 00:19:46.847 complete (in ns) avg, min, max = 18576.0, 2393.3, 8011045.8 00:19:46.847 00:19:46.847 Submit histogram 00:19:46.847 ================ 00:19:46.847 Range in us Cumulative Count 00:19:46.847 3.920 - 3.947: 0.2787% ( 54) 00:19:46.847 3.947 - 3.973: 2.1572% ( 364) 00:19:46.847 3.973 - 4.000: 7.6947% ( 1073) 00:19:46.847 4.000 - 4.027: 16.6486% ( 1735) 00:19:46.847 4.027 - 4.053: 27.9558% ( 2191) 00:19:46.847 4.053 - 4.080: 39.7017% ( 2276) 00:19:46.847 4.080 - 4.107: 51.5611% ( 2298) 00:19:46.847 4.107 - 4.133: 65.3300% ( 2668) 00:19:46.847 4.133 - 4.160: 79.5995% ( 2765) 00:19:46.847 4.160 - 4.187: 89.8075% ( 1978) 00:19:46.847 4.187 - 4.213: 95.4689% ( 1097) 00:19:46.847 4.213 - 4.240: 98.1628% ( 522) 00:19:46.847 4.240 - 4.267: 99.2001% ( 201) 00:19:46.847 4.267 - 4.293: 99.4117% ( 41) 00:19:46.847 4.293 - 4.320: 99.4684% ( 11) 00:19:46.847 4.320 - 4.347: 99.4788% ( 2) 00:19:46.847 4.347 - 4.373: 99.4839% ( 1) 00:19:46.847 4.373 - 4.400: 99.4891% ( 1) 00:19:46.847 4.453 - 4.480: 99.4942% ( 1) 00:19:46.847 4.533 - 4.560: 99.4994% ( 1) 00:19:46.847 4.560 - 4.587: 99.5046% ( 1) 00:19:46.847 4.667 - 4.693: 99.5097% ( 1) 00:19:46.847 4.800 - 4.827: 99.5149% ( 1) 00:19:46.847 5.200 - 5.227: 99.5200% ( 1) 00:19:46.847 5.280 - 5.307: 99.5252% ( 1) 00:19:46.847 5.307 - 5.333: 99.5304% ( 1) 00:19:46.847 5.360 - 5.387: 99.5355% ( 1) 00:19:46.847 5.440 - 5.467: 99.5407% ( 1) 00:19:46.847 5.520 - 5.547: 99.5459% ( 1) 00:19:46.847 5.573 - 5.600: 99.5510% ( 1) 00:19:46.847 5.653 - 5.680: 99.5562% ( 1) 00:19:46.847 6.053 - 6.080: 99.5665% ( 2) 00:19:46.847 6.080 - 6.107: 99.5717% ( 1) 00:19:46.847 6.107 - 6.133: 99.5768% ( 1) 00:19:46.847 6.187 - 6.213: 99.5820% ( 1) 00:19:46.847 6.213 - 6.240: 99.5871% ( 1) 00:19:46.847 6.267 - 6.293: 99.5923% ( 1) 00:19:46.847 6.293 - 6.320: 99.6026% ( 2) 00:19:46.847 6.320 - 6.347: 99.6181% ( 3) 00:19:46.847 6.347 - 6.373: 99.6233% ( 1) 00:19:46.847 6.373 - 6.400: 99.6284% ( 1) 00:19:46.847 6.480 - 6.507: 99.6336% ( 1) 00:19:46.847 6.560 - 6.587: 99.6387% ( 1) 00:19:46.847 6.587 - 6.613: 99.6439% ( 1) 00:19:46.847 6.720 - 6.747: 99.6491% ( 1) 00:19:46.847 6.747 - 6.773: 99.6542% ( 1) 00:19:46.847 7.040 - 7.093: 99.6646% ( 2) 00:19:46.847 7.093 - 7.147: 99.6697% ( 1) 00:19:46.847 7.200 - 7.253: 99.6749% ( 1) 00:19:46.847 7.253 - 7.307: 99.6852% ( 2) 00:19:46.847 7.360 - 7.413: 99.6955% ( 2) 00:19:46.847 7.467 - 7.520: 99.7213% ( 5) 00:19:46.847 7.520 - 7.573: 99.7265% ( 1) 00:19:46.847 7.627 - 7.680: 99.7523% ( 5) 00:19:46.847 7.680 - 7.733: 99.7574% ( 1) 00:19:46.847 7.733 - 7.787: 99.7729% ( 3) 00:19:46.847 7.787 - 7.840: 99.7832% ( 2) 
00:19:46.847 7.840 - 7.893: 99.7987% ( 3) 00:19:46.847 7.893 - 7.947: 99.8039% ( 1) 00:19:46.847 8.000 - 8.053: 99.8091% ( 1) 00:19:46.847 8.053 - 8.107: 99.8194% ( 2) 00:19:46.847 8.107 - 8.160: 99.8297% ( 2) 00:19:46.847 8.213 - 8.267: 99.8349% ( 1) 00:19:46.847 8.267 - 8.320: 99.8400% ( 1) 00:19:46.847 8.320 - 8.373: 99.8452% ( 1) 00:19:46.847 8.427 - 8.480: 99.8503% ( 1) 00:19:46.847 8.587 - 8.640: 99.8555% ( 1) 00:19:46.847 9.440 - 9.493: 99.8607% ( 1) 00:19:46.847 10.080 - 10.133: 99.8658% ( 1) 00:19:46.847 12.747 - 12.800: 99.8710% ( 1) 00:19:46.847 14.080 - 14.187: 99.8761% ( 1) 00:19:46.847 16.320 - 16.427: 99.8813% ( 1) 00:19:46.847 3986.773 - 4014.080: 100.0000% ( 23) 00:19:46.847 00:19:46.848 Complete histogram 00:19:46.848 ================== 00:19:46.848 Range in us Cumulative Count 00:19:46.848 2.387 - 2.400: 1.5534% ( 301) 00:19:46.848 2.400 - 2.413: 6.4097% ( 941) 00:19:46.848 2.413 - 2.427: 6.9154% ( 98) 00:19:46.848 2.427 - 2.440: 8.2366% ( 256) 00:19:46.848 2.440 - [2024-05-16 09:32:40.358754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:46.848 2.453: 8.8868% ( 126) 00:19:46.848 2.453 - 2.467: 52.5468% ( 8460) 00:19:46.848 2.467 - 2.480: 61.2169% ( 1680) 00:19:46.848 2.480 - 2.493: 74.2736% ( 2530) 00:19:46.848 2.493 - 2.507: 81.1942% ( 1341) 00:19:46.848 2.507 - 2.520: 83.4133% ( 430) 00:19:46.848 2.520 - 2.533: 86.6698% ( 631) 00:19:46.848 2.533 - 2.547: 91.3764% ( 912) 00:19:46.848 2.547 - 2.560: 95.1282% ( 727) 00:19:46.848 2.560 - 2.573: 97.3370% ( 428) 00:19:46.848 2.573 - 2.587: 98.6530% ( 255) 00:19:46.848 2.587 - 2.600: 99.1382% ( 94) 00:19:46.848 2.600 - 2.613: 99.3085% ( 33) 00:19:46.848 2.613 - 2.627: 99.3446% ( 7) 00:19:46.848 4.693 - 4.720: 99.3549% ( 2) 00:19:46.848 4.720 - 4.747: 99.3601% ( 1) 00:19:46.848 4.747 - 4.773: 99.3652% ( 1) 00:19:46.848 4.960 - 4.987: 99.3704% ( 1) 00:19:46.848 5.067 - 5.093: 99.3755% ( 1) 00:19:46.848 5.227 - 5.253: 99.3807% ( 1) 00:19:46.848 5.333 - 5.360: 99.3859% ( 1) 00:19:46.848 5.360 - 5.387: 99.3910% ( 1) 00:19:46.848 5.387 - 5.413: 99.3962% ( 1) 00:19:46.848 5.413 - 5.440: 99.4014% ( 1) 00:19:46.848 5.520 - 5.547: 99.4065% ( 1) 00:19:46.848 5.653 - 5.680: 99.4168% ( 2) 00:19:46.848 5.680 - 5.707: 99.4220% ( 1) 00:19:46.848 5.760 - 5.787: 99.4272% ( 1) 00:19:46.848 5.840 - 5.867: 99.4375% ( 2) 00:19:46.848 5.867 - 5.893: 99.4478% ( 2) 00:19:46.848 5.920 - 5.947: 99.4530% ( 1) 00:19:46.848 5.947 - 5.973: 99.4581% ( 1) 00:19:46.848 6.000 - 6.027: 99.4633% ( 1) 00:19:46.848 6.080 - 6.107: 99.4736% ( 2) 00:19:46.848 6.133 - 6.160: 99.4839% ( 2) 00:19:46.848 6.213 - 6.240: 99.4891% ( 1) 00:19:46.848 6.293 - 6.320: 99.4942% ( 1) 00:19:46.848 6.347 - 6.373: 99.4994% ( 1) 00:19:46.848 6.373 - 6.400: 99.5046% ( 1) 00:19:46.848 6.427 - 6.453: 99.5149% ( 2) 00:19:46.848 6.453 - 6.480: 99.5200% ( 1) 00:19:46.848 6.480 - 6.507: 99.5355% ( 3) 00:19:46.848 6.507 - 6.533: 99.5459% ( 2) 00:19:46.848 6.560 - 6.587: 99.5510% ( 1) 00:19:46.848 6.667 - 6.693: 99.5562% ( 1) 00:19:46.848 6.693 - 6.720: 99.5613% ( 1) 00:19:46.848 7.253 - 7.307: 99.5665% ( 1) 00:19:46.848 7.840 - 7.893: 99.5768% ( 2) 00:19:46.848 9.493 - 9.547: 99.5820% ( 1) 00:19:46.848 12.000 - 12.053: 99.5871% ( 1) 00:19:46.848 12.693 - 12.747: 99.5923% ( 1) 00:19:46.848 13.333 - 13.387: 99.5975% ( 1) 00:19:46.848 90.027 - 90.453: 99.6026% ( 1) 00:19:46.848 3986.773 - 4014.080: 99.9948% ( 76) 00:19:46.848 7973.547 - 8028.160: 100.0000% ( 1) 00:19:46.848 00:19:47.108 09:32:40 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:47.108 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:47.108 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:47.108 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:47.109 [ 00:19:47.109 { 00:19:47.109 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:47.109 "subtype": "Discovery", 00:19:47.109 "listen_addresses": [], 00:19:47.109 "allow_any_host": true, 00:19:47.109 "hosts": [] 00:19:47.109 }, 00:19:47.109 { 00:19:47.109 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:47.109 "subtype": "NVMe", 00:19:47.109 "listen_addresses": [ 00:19:47.109 { 00:19:47.109 "trtype": "VFIOUSER", 00:19:47.109 "adrfam": "IPv4", 00:19:47.109 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:47.109 "trsvcid": "0" 00:19:47.109 } 00:19:47.109 ], 00:19:47.109 "allow_any_host": true, 00:19:47.109 "hosts": [], 00:19:47.109 "serial_number": "SPDK1", 00:19:47.109 "model_number": "SPDK bdev Controller", 00:19:47.109 "max_namespaces": 32, 00:19:47.109 "min_cntlid": 1, 00:19:47.109 "max_cntlid": 65519, 00:19:47.109 "namespaces": [ 00:19:47.109 { 00:19:47.109 "nsid": 1, 00:19:47.109 "bdev_name": "Malloc1", 00:19:47.109 "name": "Malloc1", 00:19:47.109 "nguid": "25F1FE840ED64AA083D65651F7A50E73", 00:19:47.109 "uuid": "25f1fe84-0ed6-4aa0-83d6-5651f7a50e73" 00:19:47.109 }, 00:19:47.109 { 00:19:47.109 "nsid": 2, 00:19:47.109 "bdev_name": "Malloc3", 00:19:47.109 "name": "Malloc3", 00:19:47.109 "nguid": "D8021D5576CF438E95EFD8D08E3DA23B", 00:19:47.109 "uuid": "d8021d55-76cf-438e-95ef-d8d08e3da23b" 00:19:47.109 } 00:19:47.109 ] 00:19:47.109 }, 00:19:47.109 { 00:19:47.109 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:47.109 "subtype": "NVMe", 00:19:47.109 "listen_addresses": [ 00:19:47.109 { 00:19:47.109 "trtype": "VFIOUSER", 00:19:47.109 "adrfam": "IPv4", 00:19:47.109 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:47.109 "trsvcid": "0" 00:19:47.109 } 00:19:47.109 ], 00:19:47.109 "allow_any_host": true, 00:19:47.109 "hosts": [], 00:19:47.109 "serial_number": "SPDK2", 00:19:47.109 "model_number": "SPDK bdev Controller", 00:19:47.109 "max_namespaces": 32, 00:19:47.109 "min_cntlid": 1, 00:19:47.109 "max_cntlid": 65519, 00:19:47.109 "namespaces": [ 00:19:47.109 { 00:19:47.109 "nsid": 1, 00:19:47.109 "bdev_name": "Malloc2", 00:19:47.109 "name": "Malloc2", 00:19:47.109 "nguid": "072FF3AEF7A84763B93F09413AB68E30", 00:19:47.109 "uuid": "072ff3ae-f7a8-4763-b93f-09413ab68e30" 00:19:47.109 } 00:19:47.109 ] 00:19:47.109 } 00:19:47.109 ] 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=263123 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:19:47.109 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:19:47.109 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:19:47.370 [2024-05-16 09:32:40.741139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:47.370 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:47.631 Malloc4 00:19:47.631 09:32:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:47.631 [2024-05-16 09:32:41.112513] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:47.631 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:47.631 Asynchronous Event Request test 00:19:47.631 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:47.631 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:47.631 Registering asynchronous event callbacks... 00:19:47.631 Starting namespace attribute notice tests for all controllers... 00:19:47.631 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:47.631 aer_cb - Changed Namespace 00:19:47.631 Cleaning up... 
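The AER check above follows a simple handshake: the aer tool is started against the controller with a touch file, the shell polls until that file appears, and only then is a second namespace attached so that the resulting namespace-attribute-changed event can be observed. Below is a condensed sketch of that flow with the paths and arguments taken from the trace; the inline polling loop and the variable names stand in for the waitforfile helper and the test script's own variables.

#!/usr/bin/env bash
# Sketch of the AER flow traced above; not the test script itself.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk            # adjust to your checkout
RPC_PY="$SPDK_DIR/scripts/rpc.py"
TOUCH_FILE=/tmp/aer_touch_file
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

rm -f "$TOUCH_FILE"   # start from a clean state

# Start the AER listener in the background; it creates the touch file once its callbacks are armed.
"$SPDK_DIR/test/nvme/aer/aer" -r "$TRID" -n 2 -g -t "$TOUCH_FILE" &
aerpid=$!

# Poll for the touch file (the test uses the waitforfile helper for this), then clean it up.
while [ ! -e "$TOUCH_FILE" ]; do sleep 0.1; done
rm -f "$TOUCH_FILE"

# Adding a second namespace to cnode2 triggers the namespace attribute notice the listener waits for.
"$RPC_PY" bdev_malloc_create 64 512 --name Malloc4
"$RPC_PY" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

wait "$aerpid"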
00:19:47.892 [ 00:19:47.892 { 00:19:47.892 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:47.892 "subtype": "Discovery", 00:19:47.892 "listen_addresses": [], 00:19:47.892 "allow_any_host": true, 00:19:47.892 "hosts": [] 00:19:47.892 }, 00:19:47.892 { 00:19:47.892 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:47.892 "subtype": "NVMe", 00:19:47.892 "listen_addresses": [ 00:19:47.892 { 00:19:47.892 "trtype": "VFIOUSER", 00:19:47.892 "adrfam": "IPv4", 00:19:47.892 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:47.892 "trsvcid": "0" 00:19:47.892 } 00:19:47.892 ], 00:19:47.892 "allow_any_host": true, 00:19:47.892 "hosts": [], 00:19:47.892 "serial_number": "SPDK1", 00:19:47.892 "model_number": "SPDK bdev Controller", 00:19:47.892 "max_namespaces": 32, 00:19:47.892 "min_cntlid": 1, 00:19:47.892 "max_cntlid": 65519, 00:19:47.892 "namespaces": [ 00:19:47.892 { 00:19:47.892 "nsid": 1, 00:19:47.892 "bdev_name": "Malloc1", 00:19:47.892 "name": "Malloc1", 00:19:47.892 "nguid": "25F1FE840ED64AA083D65651F7A50E73", 00:19:47.892 "uuid": "25f1fe84-0ed6-4aa0-83d6-5651f7a50e73" 00:19:47.892 }, 00:19:47.892 { 00:19:47.892 "nsid": 2, 00:19:47.892 "bdev_name": "Malloc3", 00:19:47.892 "name": "Malloc3", 00:19:47.892 "nguid": "D8021D5576CF438E95EFD8D08E3DA23B", 00:19:47.892 "uuid": "d8021d55-76cf-438e-95ef-d8d08e3da23b" 00:19:47.892 } 00:19:47.892 ] 00:19:47.892 }, 00:19:47.892 { 00:19:47.892 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:47.892 "subtype": "NVMe", 00:19:47.892 "listen_addresses": [ 00:19:47.892 { 00:19:47.892 "trtype": "VFIOUSER", 00:19:47.892 "adrfam": "IPv4", 00:19:47.892 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:47.892 "trsvcid": "0" 00:19:47.892 } 00:19:47.892 ], 00:19:47.892 "allow_any_host": true, 00:19:47.892 "hosts": [], 00:19:47.892 "serial_number": "SPDK2", 00:19:47.892 "model_number": "SPDK bdev Controller", 00:19:47.892 "max_namespaces": 32, 00:19:47.892 "min_cntlid": 1, 00:19:47.892 "max_cntlid": 65519, 00:19:47.892 "namespaces": [ 00:19:47.892 { 00:19:47.892 "nsid": 1, 00:19:47.892 "bdev_name": "Malloc2", 00:19:47.892 "name": "Malloc2", 00:19:47.892 "nguid": "072FF3AEF7A84763B93F09413AB68E30", 00:19:47.892 "uuid": "072ff3ae-f7a8-4763-b93f-09413ab68e30" 00:19:47.892 }, 00:19:47.892 { 00:19:47.892 "nsid": 2, 00:19:47.892 "bdev_name": "Malloc4", 00:19:47.892 "name": "Malloc4", 00:19:47.892 "nguid": "5F6E19C3355E465E800A07D4F6EC1A06", 00:19:47.892 "uuid": "5f6e19c3-355e-465e-800a-07d4f6ec1a06" 00:19:47.892 } 00:19:47.892 ] 00:19:47.892 } 00:19:47.892 ] 00:19:47.892 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 263123 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 253909 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 253909 ']' 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 253909 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 253909 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 253909' 00:19:47.893 killing process with pid 253909 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 253909 00:19:47.893 [2024-05-16 09:32:41.365393] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:47.893 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 253909 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=263310 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 263310' 00:19:48.154 Process pid: 263310 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 263310 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 263310 ']' 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:48.154 09:32:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:48.154 [2024-05-16 09:32:41.592857] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:48.154 [2024-05-16 09:32:41.593783] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:48.155 [2024-05-16 09:32:41.593821] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.155 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.155 [2024-05-16 09:32:41.653112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.415 [2024-05-16 09:32:41.717208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:48.415 [2024-05-16 09:32:41.717250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.415 [2024-05-16 09:32:41.717258] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.415 [2024-05-16 09:32:41.717265] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.415 [2024-05-16 09:32:41.717270] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.415 [2024-05-16 09:32:41.717407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.415 [2024-05-16 09:32:41.717520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.415 [2024-05-16 09:32:41.717677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.415 [2024-05-16 09:32:41.717678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.416 [2024-05-16 09:32:41.781665] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:48.416 [2024-05-16 09:32:41.781700] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:48.416 [2024-05-16 09:32:41.782598] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:48.416 [2024-05-16 09:32:41.783299] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:48.416 [2024-05-16 09:32:41.783369] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
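After the interrupt-mode target comes up on cores 0-3, the trace that follows rebuilds the vfio-user configuration over JSON-RPC: create the VFIOUSER transport, then for each device create a malloc bdev, a subsystem, a namespace and a vfio-user listener. A condensed, standalone sketch of that sequence for device 1 is shown below; the variable names and the plain sleep (standing in for the waitforlisten helper) are introduced here, while every command mirrors the traced invocations.

#!/usr/bin/env bash
# Sketch of the setup the trace below performs; repeat the per-device block for vfio-user2/2.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk            # adjust to your checkout
RPC_PY="$SPDK_DIR/scripts/rpc.py"

# Start the target on cores 0-3 in interrupt mode, as in the @54 invocation above.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!
sleep 1   # the test waits on the RPC socket with waitforlisten before proceeding

# Create the vfio-user transport with the extra arguments the interrupt-mode test passes.
"$RPC_PY" nvmf_create_transport -t VFIOUSER -M -I

# Device 1: a malloc-backed namespace exposed through a vfio-user socket directory.
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
"$RPC_PY" bdev_malloc_create 64 512 -b Malloc1
"$RPC_PY" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
"$RPC_PY" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
"$RPC_PY" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

# Tear down when finished: kill "$nvmfpid" and rm -rf /var/run/vfio-user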
00:19:48.989 09:32:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:48.989 09:32:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:19:48.989 09:32:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:49.932 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:50.192 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:50.192 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:50.192 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:50.192 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:50.192 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:50.192 Malloc1 00:19:50.192 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:50.454 09:32:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:50.714 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:50.714 [2024-05-16 09:32:44.206095] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:50.714 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:50.714 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:50.714 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:50.975 Malloc2 00:19:50.975 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:51.236 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:51.236 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 263310 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 263310 ']' 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 263310 00:19:51.498 
09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 263310 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 263310' 00:19:51.498 killing process with pid 263310 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 263310 00:19:51.498 [2024-05-16 09:32:44.976702] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:51.498 09:32:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 263310 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:51.759 00:19:51.759 real 0m50.931s 00:19:51.759 user 3m21.818s 00:19:51.759 sys 0m3.080s 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:51.759 ************************************ 00:19:51.759 END TEST nvmf_vfio_user 00:19:51.759 ************************************ 00:19:51.759 09:32:45 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:51.759 09:32:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:51.759 09:32:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:51.759 09:32:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.759 ************************************ 00:19:51.759 START TEST nvmf_vfio_user_nvme_compliance 00:19:51.759 ************************************ 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:51.759 * Looking for test storage... 
00:19:51.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.759 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.022 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=264061 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 264061' 00:19:52.023 Process pid: 264061 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 264061 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 264061 ']' 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:52.023 09:32:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:52.023 [2024-05-16 09:32:45.385165] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:19:52.023 [2024-05-16 09:32:45.385213] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.023 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.023 [2024-05-16 09:32:45.445669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:52.023 [2024-05-16 09:32:45.511704] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.023 [2024-05-16 09:32:45.511741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.023 [2024-05-16 09:32:45.511749] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.023 [2024-05-16 09:32:45.511755] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.023 [2024-05-16 09:32:45.511761] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
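Note: the compliance stage above starts its own target with tracing enabled and then blocks until the RPC socket answers (waitforlisten). A rough, simplified equivalent without the SPDK test helpers, assuming it is run from the SPDK repo root (-i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups, -m 0x7 runs reactors on cores 0-2, matching the "Total cores available: 3" line above):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready (a stripped-down waitforlisten)
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Once it is up, the hint printed above ('spdk_trace -s nvmf -i 0') can be used to snapshot trace events at runtime.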
00:19:52.023 [2024-05-16 09:32:45.511903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.023 [2024-05-16 09:32:45.512041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.023 [2024-05-16 09:32:45.512044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.967 09:32:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:52.967 09:32:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:19:52.967 09:32:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.912 malloc0 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:53.912 [2024-05-16 09:32:47.244775] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.912 09:32:47 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:53.912 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.912 00:19:53.912 00:19:53.912 CUnit - A unit testing framework for C - Version 2.1-3 00:19:53.912 http://cunit.sourceforge.net/ 00:19:53.912 00:19:53.912 00:19:53.912 Suite: nvme_compliance 00:19:53.912 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-16 09:32:47.410306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:53.912 [2024-05-16 09:32:47.411631] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:53.912 [2024-05-16 09:32:47.411642] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:53.912 [2024-05-16 09:32:47.411647] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:53.912 [2024-05-16 09:32:47.413330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:53.912 passed 00:19:54.174 Test: admin_identify_ctrlr_verify_fused ...[2024-05-16 09:32:47.507969] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.174 [2024-05-16 09:32:47.514009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.174 passed 00:19:54.174 Test: admin_identify_ns ...[2024-05-16 09:32:47.606320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.174 [2024-05-16 09:32:47.670063] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:54.174 [2024-05-16 09:32:47.678061] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:54.174 [2024-05-16 09:32:47.699174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.435 passed 00:19:54.435 Test: admin_get_features_mandatory_features ...[2024-05-16 09:32:47.789820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.435 [2024-05-16 09:32:47.792840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.435 passed 00:19:54.435 Test: admin_get_features_optional_features ...[2024-05-16 09:32:47.888392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.435 [2024-05-16 09:32:47.891408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.435 passed 00:19:54.435 Test: admin_set_features_number_of_queues ...[2024-05-16 09:32:47.983293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.696 [2024-05-16 09:32:48.088162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.696 passed 00:19:54.696 Test: admin_get_log_page_mandatory_logs ...[2024-05-16 09:32:48.182164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.696 [2024-05-16 09:32:48.185189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.696 passed 
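Note: after the cnode0 subsystem is wired up over the /var/run/vfio-user socket (the same rpc_cmd sequence as before, here with malloc0, serial "spdk" and -m 32), the compliance binary is pointed at it with a transport ID string, exactly as traced above:

    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The CUnit output that follows is this binary exercising admin and I/O queue behaviour against the vfio-user controller.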
00:19:54.957 Test: admin_get_log_page_with_lpo ...[2024-05-16 09:32:48.278287] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.957 [2024-05-16 09:32:48.346066] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:54.957 [2024-05-16 09:32:48.359116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.957 passed 00:19:54.957 Test: fabric_property_get ...[2024-05-16 09:32:48.453160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:54.957 [2024-05-16 09:32:48.454380] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:54.957 [2024-05-16 09:32:48.456174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:54.957 passed 00:19:55.218 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-16 09:32:48.549706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.218 [2024-05-16 09:32:48.550955] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:55.218 [2024-05-16 09:32:48.553737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.218 passed 00:19:55.218 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-16 09:32:48.647324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.218 [2024-05-16 09:32:48.731059] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:55.218 [2024-05-16 09:32:48.747056] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:55.218 [2024-05-16 09:32:48.752143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.479 passed 00:19:55.479 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-16 09:32:48.844165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.479 [2024-05-16 09:32:48.845397] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:55.479 [2024-05-16 09:32:48.847185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.479 passed 00:19:55.479 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-16 09:32:48.942341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.480 [2024-05-16 09:32:49.019062] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:55.741 [2024-05-16 09:32:49.043070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:55.741 [2024-05-16 09:32:49.048135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.741 passed 00:19:55.741 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-16 09:32:49.138792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:55.741 [2024-05-16 09:32:49.140020] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:55.741 [2024-05-16 09:32:49.140040] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:55.741 [2024-05-16 09:32:49.141814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:55.741 passed 00:19:55.741 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-16 
09:32:49.235324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.002 [2024-05-16 09:32:49.335060] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:56.002 [2024-05-16 09:32:49.343067] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:56.002 [2024-05-16 09:32:49.351060] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:56.002 [2024-05-16 09:32:49.359058] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:56.002 [2024-05-16 09:32:49.391158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.002 passed 00:19:56.002 Test: admin_create_io_sq_verify_pc ...[2024-05-16 09:32:49.481780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:56.002 [2024-05-16 09:32:49.497067] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:56.002 [2024-05-16 09:32:49.514925] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:56.002 passed 00:19:56.264 Test: admin_create_io_qp_max_qps ...[2024-05-16 09:32:49.608493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:57.205 [2024-05-16 09:32:50.728064] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:57.778 [2024-05-16 09:32:51.109293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:57.778 passed 00:19:57.778 Test: admin_create_io_sq_shared_cq ...[2024-05-16 09:32:51.197306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:57.778 [2024-05-16 09:32:51.329064] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:58.039 [2024-05-16 09:32:51.366125] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:58.039 passed 00:19:58.039 00:19:58.039 Run Summary: Type Total Ran Passed Failed Inactive 00:19:58.039 suites 1 1 n/a 0 0 00:19:58.039 tests 18 18 18 0 0 00:19:58.039 asserts 360 360 360 0 n/a 00:19:58.039 00:19:58.039 Elapsed time = 1.660 seconds 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 264061 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 264061 ']' 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 264061 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 264061 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 264061' 00:19:58.039 killing process with pid 264061 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 264061 00:19:58.039 [2024-05-16 09:32:51.475811] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:58.039 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 264061 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:58.301 00:19:58.301 real 0m6.413s 00:19:58.301 user 0m18.403s 00:19:58.301 sys 0m0.437s 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:58.301 ************************************ 00:19:58.301 END TEST nvmf_vfio_user_nvme_compliance 00:19:58.301 ************************************ 00:19:58.301 09:32:51 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:58.301 09:32:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:58.301 09:32:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:58.301 09:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:58.301 ************************************ 00:19:58.301 START TEST nvmf_vfio_user_fuzz 00:19:58.301 ************************************ 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:58.301 * Looking for test storage... 
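Note: the killprocess calls in the trace follow one pattern: check that the pid is still alive with kill -0, confirm the process name with ps (reactor_0, not sudo), then kill it and wait for it to exit. A hedged stand-alone sketch of that pattern (the pid value is just the one from this run):

    pid=264061
    if kill -0 "$pid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for an SPDK target
        [ "$name" != sudo ] && kill "$pid" && wait "$pid" 2>/dev/null
    fi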
00:19:58.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.301 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:58.302 09:32:51 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=265455 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 265455' 00:19:58.302 Process pid: 265455 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 265455 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 265455 ']' 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.302 09:32:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:59.244 09:32:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:59.244 09:32:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:19:59.244 09:32:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.186 malloc0 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.186 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:00.447 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.447 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:20:00.447 09:32:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:32.561 Fuzzing completed. Shutting down the fuzz application 00:20:32.561 00:20:32.561 Dumping successful admin opcodes: 00:20:32.561 8, 9, 10, 24, 00:20:32.561 Dumping successful io opcodes: 00:20:32.561 0, 00:20:32.561 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1074615, total successful commands: 4237, random_seed: 1775327744 00:20:32.561 NS: 0x200003a1ef00 admin qp, Total commands completed: 134983, total successful commands: 1090, random_seed: 3611340224 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 265455 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 265455 ']' 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 265455 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 265455 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 265455' 00:20:32.561 killing process with pid 265455 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 265455 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 265455 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:32.561 
09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:32.561 00:20:32.561 real 0m33.675s 00:20:32.561 user 0m38.337s 00:20:32.561 sys 0m24.484s 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:32.561 09:33:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.561 ************************************ 00:20:32.561 END TEST nvmf_vfio_user_fuzz 00:20:32.561 ************************************ 00:20:32.561 09:33:25 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:32.561 09:33:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:32.561 09:33:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:32.561 09:33:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:32.561 ************************************ 00:20:32.561 START TEST nvmf_host_management 00:20:32.561 ************************************ 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:32.561 * Looking for test storage... 00:20:32.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.561 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
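Note: the fuzz stage traced above drives the same kind of vfio-user subsystem with nvme_fuzz for a 30-second window on core 1. The invocation, copied from this run (the subsystem plumbing mirrors the compliance stage):

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

Its summary above reports roughly 1.07M I/O commands and 135k admin commands completed in that window before the subsystem is deleted and the target is killed.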
00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:20:32.562 09:33:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.160 09:33:32 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:39.160 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:39.160 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:39.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
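Note: the device scan above is matching supported NVMe-oF NICs by PCI vendor/device ID (here Intel E810, 8086:159b) and then resolving the kernel net devices behind each port via sysfs. A hedged equivalent of that lookup, assuming lspci is available:

    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)"   # e.g. cvl_0_0 / cvl_0_1
    done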
00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:39.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:39.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:39.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:20:39.161 00:20:39.161 --- 10.0.0.2 ping statistics --- 00:20:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.161 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:20:39.161 00:20:39.161 --- 10.0.0.1 ping statistics --- 00:20:39.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.161 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=276317 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 276317 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 276317 ']' 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.161 09:33:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.161 [2024-05-16 09:33:32.608189] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
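Note: at this point nvmf_tcp_init has split the two port functions into a point-to-point test topology: cvl_0_0 was moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2/24 (the target side), cvl_0_1 stayed in the root namespace with 10.0.0.1/24 (the initiator side), an iptables ACCEPT rule was inserted for TCP port 4420 on cvl_0_1, and reachability was verified with one ping in each direction. The same state can be inspected by hand with standard iproute2 commands (not part of the test scripts):
    ip netns list                                              # expect: cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0  # expect: inet 10.0.0.2/24
    ip -4 addr show dev cvl_0_1                                # expect: inet 10.0.0.1/24
The nvmf_tgt launched just above runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), which is why it later listens on 10.0.0.2 port 4420.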
00:20:39.161 [2024-05-16 09:33:32.608237] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.161 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.161 [2024-05-16 09:33:32.690180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.423 [2024-05-16 09:33:32.767475] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.423 [2024-05-16 09:33:32.767526] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.423 [2024-05-16 09:33:32.767534] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.423 [2024-05-16 09:33:32.767541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.423 [2024-05-16 09:33:32.767547] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.423 [2024-05-16 09:33:32.767663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.423 [2024-05-16 09:33:32.767824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.423 [2024-05-16 09:33:32.767984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.423 [2024-05-16 09:33:32.767985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.996 [2024-05-16 09:33:33.424606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.996 09:33:33 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.996 Malloc0 00:20:39.996 [2024-05-16 09:33:33.487767] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:39.996 [2024-05-16 09:33:33.487994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=276439 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 276439 /var/tmp/bdevperf.sock 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 276439 ']' 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:39.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:39.996 { 00:20:39.996 "params": { 00:20:39.996 "name": "Nvme$subsystem", 00:20:39.996 "trtype": "$TEST_TRANSPORT", 00:20:39.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:39.996 "adrfam": "ipv4", 00:20:39.996 "trsvcid": "$NVMF_PORT", 00:20:39.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:39.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:39.996 "hdgst": ${hdgst:-false}, 00:20:39.996 "ddgst": ${ddgst:-false} 00:20:39.996 }, 00:20:39.996 "method": "bdev_nvme_attach_controller" 00:20:39.996 } 00:20:39.996 EOF 00:20:39.996 )") 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:20:39.996 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:20:40.256 09:33:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.256 "params": { 00:20:40.256 "name": "Nvme0", 00:20:40.256 "trtype": "tcp", 00:20:40.256 "traddr": "10.0.0.2", 00:20:40.256 "adrfam": "ipv4", 00:20:40.256 "trsvcid": "4420", 00:20:40.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:40.256 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:40.256 "hdgst": false, 00:20:40.256 "ddgst": false 00:20:40.256 }, 00:20:40.256 "method": "bdev_nvme_attach_controller" 00:20:40.256 }' 00:20:40.256 [2024-05-16 09:33:33.587109] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:20:40.256 [2024-05-16 09:33:33.587160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276439 ] 00:20:40.256 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.256 [2024-05-16 09:33:33.645565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.256 [2024-05-16 09:33:33.709898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.517 Running I/O for 10 seconds... 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.089 09:33:34 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.089 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:20:41.089 [2024-05-16 09:33:34.434983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x619ac0 is same with the state(5) to be set
[... the same tcp.c:1598 nvmf_tcp_qpair_set_recv_state error for tqpair=0x619ac0 repeats dozens of times between 09:33:34.434983 and 09:33:34.435472; the repeated lines are elided here ...]
00:20:41.090 [2024-05-16 09:33:34.435979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.090 [2024-05-16 09:33:34.436018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... an identical READ / "ABORTED - SQ DELETION" pair is logged for each of the 64 queued commands (sqid:1, cid:0 through cid:63, lba 98304 through 106368, len:128 each) between 09:33:34.435979 and 09:33:34.437190; the repeated lines are elided here ...]
00:20:41.091 [2024-05-16 09:33:34.437200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226a3b0 is same with the state(5) to be set
00:20:41.091 [2024-05-16 09:33:34.437242] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x226a3b0 was disconnected and freed. reset controller.
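Note: the burst of ABORTED - SQ DELETION completions above is the expected effect of host_management.sh@84 removing the allowed host NQN while bdevperf still has a full queue (-q 64) in flight: the target tears down the TCP queue pair, every outstanding READ completes as aborted, and the initiator then resets the controller. The same behaviour can be provoked against any running SPDK target with the stock RPC client; the scripts/rpc.py path is the conventional location in an SPDK checkout and is an assumption here, while the NQNs are the ones this test uses:
    # With an initiator connected as nqn.2016-06.io.spdk:host0 and I/O in flight ...
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ... its queued commands complete as "ABORTED - SQ DELETION" and the host-side
    # driver disconnects the queue pair and schedules a controller reset.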
00:20:41.091 [2024-05-16 09:33:34.438465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:41.091 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.091 task offset: 98304 on job bdev=Nvme0n1 fails 00:20:41.091 00:20:41.091 Latency(us) 00:20:41.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.091 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.091 Job: Nvme0n1 ended in about 0.58 seconds with error 00:20:41.091 Verification LBA range: start 0x0 length 0x400 00:20:41.091 Nvme0n1 : 0.58 1319.65 82.48 109.97 0.00 43752.58 5679.79 35607.89 00:20:41.091 =================================================================================================================== 00:20:41.091 Total : 1319.65 82.48 109.97 0.00 43752.58 5679.79 35607.89 00:20:41.091 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:41.091 [2024-05-16 09:33:34.440479] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:41.091 [2024-05-16 09:33:34.440503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e314a0 (9): Bad file descriptor 00:20:41.091 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.091 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:41.091 09:33:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.091 09:33:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:20:41.091 [2024-05-16 09:33:34.457126] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
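Note: host_management.sh@85 immediately re-adds the host NQN, which is why the controller reset triggered by the disconnect can finish with "Resetting controller successful." instead of being rejected at reconnect. Done by hand it would look like this (again assuming the stock scripts/rpc.py client; the jq filter is just one way to slice the output):
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Verify host0 is back on cnode0's allow list
    scripts/rpc.py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0").hosts'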
00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 276439 00:20:42.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (276439) - No such process 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:42.035 { 00:20:42.035 "params": { 00:20:42.035 "name": "Nvme$subsystem", 00:20:42.035 "trtype": "$TEST_TRANSPORT", 00:20:42.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.035 "adrfam": "ipv4", 00:20:42.035 "trsvcid": "$NVMF_PORT", 00:20:42.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.035 "hdgst": ${hdgst:-false}, 00:20:42.035 "ddgst": ${ddgst:-false} 00:20:42.035 }, 00:20:42.035 "method": "bdev_nvme_attach_controller" 00:20:42.035 } 00:20:42.035 EOF 00:20:42.035 )") 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:20:42.035 09:33:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:42.035 "params": { 00:20:42.035 "name": "Nvme0", 00:20:42.035 "trtype": "tcp", 00:20:42.035 "traddr": "10.0.0.2", 00:20:42.035 "adrfam": "ipv4", 00:20:42.035 "trsvcid": "4420", 00:20:42.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:42.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:42.035 "hdgst": false, 00:20:42.035 "ddgst": false 00:20:42.035 }, 00:20:42.035 "method": "bdev_nvme_attach_controller" 00:20:42.035 }' 00:20:42.035 [2024-05-16 09:33:35.506714] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:20:42.035 [2024-05-16 09:33:35.506768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276898 ] 00:20:42.035 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.035 [2024-05-16 09:33:35.564571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.297 [2024-05-16 09:33:35.629065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.558 Running I/O for 1 seconds... 
00:20:43.500 00:20:43.500 Latency(us) 00:20:43.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.500 Verification LBA range: start 0x0 length 0x400 00:20:43.500 Nvme0n1 : 1.02 1687.75 105.48 0.00 0.00 37242.10 5406.72 31675.73 00:20:43.500 =================================================================================================================== 00:20:43.500 Total : 1687.75 105.48 0.00 0.00 37242.10 5406.72 31675.73 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.761 rmmod nvme_tcp 00:20:43.761 rmmod nvme_fabrics 00:20:43.761 rmmod nvme_keyring 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 276317 ']' 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 276317 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 276317 ']' 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 276317 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 276317 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 276317' 00:20:43.761 killing process with pid 276317 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 276317 00:20:43.761 [2024-05-16 09:33:37.231035] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:43.761 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 276317 00:20:44.023 [2024-05-16 09:33:37.336409] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.023 09:33:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.951 09:33:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.951 09:33:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:45.951 00:20:45.951 real 0m13.986s 00:20:45.951 user 0m22.974s 00:20:45.951 sys 0m6.077s 00:20:45.951 09:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:45.951 09:33:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:45.951 ************************************ 00:20:45.951 END TEST nvmf_host_management 00:20:45.951 ************************************ 00:20:45.951 09:33:39 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:45.952 09:33:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:45.952 09:33:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:45.952 09:33:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:45.952 ************************************ 00:20:45.952 START TEST nvmf_lvol 00:20:45.952 ************************************ 00:20:45.952 09:33:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:46.212 * Looking for test storage... 
00:20:46.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.212 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.212 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.213 09:33:39 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.213 09:33:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:52.805 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:52.805 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:52.805 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:52.805 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.805 
09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.805 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.806 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:20:53.067 00:20:53.067 --- 10.0.0.2 ping statistics --- 00:20:53.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.067 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:20:53.067 00:20:53.067 --- 10.0.0.1 ping statistics --- 00:20:53.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.067 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=281377 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 281377 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 281377 ']' 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:53.067 09:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:53.328 [2024-05-16 09:33:46.669059] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:20:53.328 [2024-05-16 09:33:46.669123] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.328 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.328 [2024-05-16 09:33:46.739398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.328 [2024-05-16 09:33:46.813179] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.328 [2024-05-16 09:33:46.813222] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:53.328 [2024-05-16 09:33:46.813229] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.328 [2024-05-16 09:33:46.813236] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.328 [2024-05-16 09:33:46.813241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.328 [2024-05-16 09:33:46.813377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.328 [2024-05-16 09:33:46.813491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.328 [2024-05-16 09:33:46.813496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.900 09:33:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:53.900 09:33:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:20:53.900 09:33:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.900 09:33:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.900 09:33:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:54.161 09:33:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.161 09:33:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:54.161 [2024-05-16 09:33:47.634201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.161 09:33:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:54.422 09:33:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:54.422 09:33:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:54.683 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:54.683 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:54.683 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:54.944 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=08261120-d034-430d-b48d-7a77931eba6d 00:20:54.944 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08261120-d034-430d-b48d-7a77931eba6d lvol 20 00:20:55.204 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3f7c752f-3578-46f5-804e-0aaeda3fc1cc 00:20:55.205 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:55.205 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f7c752f-3578-46f5-804e-0aaeda3fc1cc 00:20:55.466 09:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
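The rpc.py sequence traced here, together with the snapshot and clone steps that follow, condenses to roughly the sketch below; the rpc.py path is shortened, UUIDs come back on stdout, 10.0.0.2:4420 is the test's own in-namespace listener address, and the sizes are the script's LVOL_BDEV_INIT_SIZE/LVOL_BDEV_FINAL_SIZE values:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192              # same transport options the test uses
  m0=$($rpc bdev_malloc_create 64 512)                      # two 64 MB malloc bdevs, 512-byte blocks
  m1=$($rpc bdev_malloc_create 64 512)
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"    # RAID0 across them
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)            # lvstore on the raid, UUID on stdout
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)           # initial lvol (LVOL_BDEV_INIT_SIZE=20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # later, while spdk_nvme_perf is running against the subsystem:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                          # grow the live lvol (LVOL_BDEV_FINAL_SIZE=30)
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                           # detach the clone from its snapshot

All of it goes through the target's default RPC socket; in the trace the same calls simply use the full rpc.py path inside the workspace.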
00:20:55.466 [2024-05-16 09:33:49.024932] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:55.466 [2024-05-16 09:33:49.025179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.728 09:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:55.728 09:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=281880 00:20:55.728 09:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:55.728 09:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:55.728 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.670 09:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3f7c752f-3578-46f5-804e-0aaeda3fc1cc MY_SNAPSHOT 00:20:56.930 09:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37267fcd-09b6-4076-9069-aa18b5b053ff 00:20:56.930 09:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3f7c752f-3578-46f5-804e-0aaeda3fc1cc 30 00:20:57.191 09:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37267fcd-09b6-4076-9069-aa18b5b053ff MY_CLONE 00:20:57.453 09:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bf89fb80-73ae-42ca-a31f-0254ce588c91 00:20:57.453 09:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bf89fb80-73ae-42ca-a31f-0254ce588c91 00:20:57.714 09:33:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 281880 00:21:06.144 Initializing NVMe Controllers 00:21:06.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:06.144 Controller IO queue size 128, less than required. 00:21:06.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:06.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:21:06.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:21:06.144 Initialization complete. Launching workers. 
00:21:06.144 ======================================================== 00:21:06.144 Latency(us) 00:21:06.144 Device Information : IOPS MiB/s Average min max 00:21:06.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12154.50 47.48 10536.36 1469.08 56664.02 00:21:06.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17738.80 69.29 7216.05 2051.75 51091.97 00:21:06.144 ======================================================== 00:21:06.144 Total : 29893.30 116.77 8566.08 1469.08 56664.02 00:21:06.144 00:21:06.144 09:33:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:06.404 09:33:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3f7c752f-3578-46f5-804e-0aaeda3fc1cc 00:21:06.404 09:33:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08261120-d034-430d-b48d-7a77931eba6d 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.664 rmmod nvme_tcp 00:21:06.664 rmmod nvme_fabrics 00:21:06.664 rmmod nvme_keyring 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 281377 ']' 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 281377 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 281377 ']' 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 281377 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 281377 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 281377' 00:21:06.664 killing process with pid 281377 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 281377 00:21:06.664 [2024-05-16 09:34:00.174412] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:21:06.664 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 281377 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.925 09:34:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.839 09:34:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.100 00:21:09.100 real 0m22.896s 00:21:09.100 user 1m3.839s 00:21:09.100 sys 0m7.373s 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:21:09.100 ************************************ 00:21:09.100 END TEST nvmf_lvol 00:21:09.100 ************************************ 00:21:09.100 09:34:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:09.100 09:34:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:09.100 09:34:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:09.100 09:34:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:09.100 ************************************ 00:21:09.100 START TEST nvmf_lvs_grow 00:21:09.100 ************************************ 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:21:09.100 * Looking for test storage... 
00:21:09.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:09.100 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.101 09:34:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.101 09:34:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.101 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:09.101 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:09.101 09:34:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:21:09.101 09:34:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:17.244 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:17.244 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:17.244 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:17.244 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:21:17.244 00:21:17.244 --- 10.0.0.2 ping statistics --- 00:21:17.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.244 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:21:17.244 00:21:17.244 --- 10.0.0.1 ping statistics --- 00:21:17.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.244 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=288116 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 288116 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 288116 ']' 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:17.244 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.245 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.245 09:34:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:17.245 [2024-05-16 09:34:09.643264] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:21:17.245 [2024-05-16 09:34:09.643313] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.245 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.245 [2024-05-16 09:34:09.708717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.245 [2024-05-16 09:34:09.773350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.245 [2024-05-16 09:34:09.773388] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:17.245 [2024-05-16 09:34:09.773395] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.245 [2024-05-16 09:34:09.773402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.245 [2024-05-16 09:34:09.773407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.245 [2024-05-16 09:34:09.773426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:17.245 [2024-05-16 09:34:10.596493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:17.245 ************************************ 00:21:17.245 START TEST lvs_grow_clean 00:21:17.245 ************************************ 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:17.245 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:17.505 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:21:17.505 09:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:17.505 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:17.506 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:17.506 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:17.766 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:17.766 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:17.766 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 lvol 150 00:21:18.026 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2d5816d-c568-40af-9413-7adff5793335 00:21:18.026 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:18.026 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:18.026 [2024-05-16 09:34:11.505148] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:18.026 [2024-05-16 09:34:11.505201] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:18.026 true 00:21:18.026 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:18.026 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:18.287 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:18.287 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:18.548 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2d5816d-c568-40af-9413-7adff5793335 00:21:18.548 09:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:18.809 [2024-05-16 09:34:12.130860] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:18.809 [2024-05-16 
09:34:12.131082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=288804 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 288804 /var/tmp/bdevperf.sock 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 288804 ']' 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:18.809 09:34:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.809 [2024-05-16 09:34:12.349411] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
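Before the bdevperf output starts, it helps to pin down where the cluster and block counts used by the checks above come from; the lines below are just the arithmetic for the sizes used in this run, with the "one cluster lost to lvstore metadata" figure inferred from the reported totals rather than taken from any spec:

# 200 MiB AIO file, 4096 B blocks           -> 209715200 / 4096 = 51200 blocks ("old block count 51200")
# lvstore with --cluster-sz 4194304 (4 MiB) -> 200 MiB / 4 MiB  = 50 clusters, 49 reported as data clusters
# truncate -s 400M + bdev_aio_rescan        -> 400 MiB / 4 KiB  = 102400 blocks ("new block count 102400")
# bdev_lvol_grow_lvstore (issued later)     -> 400 MiB / 4 MiB  = 100 clusters, 99 reported after the grow
# 150 MiB lvol rounded up to whole clusters -> 38 clusters = 152 MiB = 38912 blocks of 4096 B

That last figure is why the namespace exported to bdevperf reports "num_blocks": 38912 in the bdev_get_bdevs listing that follows.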
00:21:18.809 [2024-05-16 09:34:12.349464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288804 ] 00:21:19.069 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.069 [2024-05-16 09:34:12.427884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.069 [2024-05-16 09:34:12.492012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.641 09:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:19.641 09:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:21:19.641 09:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:19.902 Nvme0n1 00:21:19.902 09:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:19.902 [ 00:21:19.902 { 00:21:19.902 "name": "Nvme0n1", 00:21:19.902 "aliases": [ 00:21:19.902 "c2d5816d-c568-40af-9413-7adff5793335" 00:21:19.902 ], 00:21:19.902 "product_name": "NVMe disk", 00:21:19.902 "block_size": 4096, 00:21:19.902 "num_blocks": 38912, 00:21:19.902 "uuid": "c2d5816d-c568-40af-9413-7adff5793335", 00:21:19.902 "assigned_rate_limits": { 00:21:19.902 "rw_ios_per_sec": 0, 00:21:19.902 "rw_mbytes_per_sec": 0, 00:21:19.902 "r_mbytes_per_sec": 0, 00:21:19.902 "w_mbytes_per_sec": 0 00:21:19.902 }, 00:21:19.902 "claimed": false, 00:21:19.902 "zoned": false, 00:21:19.902 "supported_io_types": { 00:21:19.902 "read": true, 00:21:19.902 "write": true, 00:21:19.902 "unmap": true, 00:21:19.902 "write_zeroes": true, 00:21:19.902 "flush": true, 00:21:19.902 "reset": true, 00:21:19.902 "compare": true, 00:21:19.902 "compare_and_write": true, 00:21:19.902 "abort": true, 00:21:19.902 "nvme_admin": true, 00:21:19.902 "nvme_io": true 00:21:19.902 }, 00:21:19.902 "memory_domains": [ 00:21:19.902 { 00:21:19.902 "dma_device_id": "system", 00:21:19.902 "dma_device_type": 1 00:21:19.902 } 00:21:19.902 ], 00:21:19.902 "driver_specific": { 00:21:19.902 "nvme": [ 00:21:19.902 { 00:21:19.902 "trid": { 00:21:19.902 "trtype": "TCP", 00:21:19.902 "adrfam": "IPv4", 00:21:19.902 "traddr": "10.0.0.2", 00:21:19.902 "trsvcid": "4420", 00:21:19.902 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:19.902 }, 00:21:19.902 "ctrlr_data": { 00:21:19.902 "cntlid": 1, 00:21:19.902 "vendor_id": "0x8086", 00:21:19.902 "model_number": "SPDK bdev Controller", 00:21:19.902 "serial_number": "SPDK0", 00:21:19.902 "firmware_revision": "24.05", 00:21:19.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:19.902 "oacs": { 00:21:19.902 "security": 0, 00:21:19.902 "format": 0, 00:21:19.902 "firmware": 0, 00:21:19.902 "ns_manage": 0 00:21:19.902 }, 00:21:19.902 "multi_ctrlr": true, 00:21:19.902 "ana_reporting": false 00:21:19.902 }, 00:21:19.902 "vs": { 00:21:19.902 "nvme_version": "1.3" 00:21:19.902 }, 00:21:19.902 "ns_data": { 00:21:19.902 "id": 1, 00:21:19.902 "can_share": true 00:21:19.902 } 00:21:19.902 } 00:21:19.902 ], 00:21:19.902 "mp_policy": "active_passive" 00:21:19.902 } 00:21:19.902 } 00:21:19.902 ] 00:21:20.163 09:34:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=289040 00:21:20.163 09:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:20.163 09:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.163 Running I/O for 10 seconds... 00:21:21.105 Latency(us) 00:21:21.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:21.105 Nvme0n1 : 1.00 18093.00 70.68 0.00 0.00 0.00 0.00 0.00 00:21:21.105 =================================================================================================================== 00:21:21.105 Total : 18093.00 70.68 0.00 0.00 0.00 0.00 0.00 00:21:21.105 00:21:22.048 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:22.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:22.048 Nvme0n1 : 2.00 18188.00 71.05 0.00 0.00 0.00 0.00 0.00 00:21:22.048 =================================================================================================================== 00:21:22.048 Total : 18188.00 71.05 0.00 0.00 0.00 0.00 0.00 00:21:22.048 00:21:22.310 true 00:21:22.310 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:22.310 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:22.310 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:22.310 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:22.310 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 289040 00:21:23.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:23.253 Nvme0n1 : 3.00 18219.33 71.17 0.00 0.00 0.00 0.00 0.00 00:21:23.253 =================================================================================================================== 00:21:23.253 Total : 18219.33 71.17 0.00 0.00 0.00 0.00 0.00 00:21:23.253 00:21:24.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:24.195 Nvme0n1 : 4.00 18251.00 71.29 0.00 0.00 0.00 0.00 0.00 00:21:24.195 =================================================================================================================== 00:21:24.195 Total : 18251.00 71.29 0.00 0.00 0.00 0.00 0.00 00:21:24.195 00:21:25.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:25.137 Nvme0n1 : 5.00 18278.60 71.40 0.00 0.00 0.00 0.00 0.00 00:21:25.137 =================================================================================================================== 00:21:25.137 Total : 18278.60 71.40 0.00 0.00 0.00 0.00 0.00 00:21:25.137 00:21:26.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:26.078 Nvme0n1 : 6.00 18299.50 71.48 0.00 0.00 0.00 0.00 0.00 00:21:26.078 
=================================================================================================================== 00:21:26.078 Total : 18299.50 71.48 0.00 0.00 0.00 0.00 0.00 00:21:26.078 00:21:27.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:27.022 Nvme0n1 : 7.00 18321.43 71.57 0.00 0.00 0.00 0.00 0.00 00:21:27.022 =================================================================================================================== 00:21:27.022 Total : 18321.43 71.57 0.00 0.00 0.00 0.00 0.00 00:21:27.022 00:21:28.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:28.408 Nvme0n1 : 8.00 18339.00 71.64 0.00 0.00 0.00 0.00 0.00 00:21:28.408 =================================================================================================================== 00:21:28.408 Total : 18339.00 71.64 0.00 0.00 0.00 0.00 0.00 00:21:28.408 00:21:29.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:29.350 Nvme0n1 : 9.00 18346.89 71.67 0.00 0.00 0.00 0.00 0.00 00:21:29.350 =================================================================================================================== 00:21:29.350 Total : 18346.89 71.67 0.00 0.00 0.00 0.00 0.00 00:21:29.350 00:21:30.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:30.291 Nvme0n1 : 10.00 18357.90 71.71 0.00 0.00 0.00 0.00 0.00 00:21:30.291 =================================================================================================================== 00:21:30.291 Total : 18357.90 71.71 0.00 0.00 0.00 0.00 0.00 00:21:30.291 00:21:30.291 00:21:30.291 Latency(us) 00:21:30.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:30.291 Nvme0n1 : 10.01 18358.87 71.71 0.00 0.00 6968.63 2116.27 13325.65 00:21:30.291 =================================================================================================================== 00:21:30.291 Total : 18358.87 71.71 0.00 0.00 6968.63 2116.27 13325.65 00:21:30.291 0 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 288804 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 288804 ']' 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 288804 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 288804 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 288804' 00:21:30.291 killing process with pid 288804 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 288804 00:21:30.291 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.291 00:21:30.291 Latency(us) 00:21:30.291 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:30.291 =================================================================================================================== 00:21:30.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 288804 00:21:30.291 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:30.551 09:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:30.551 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:30.551 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:21:30.812 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:21:30.812 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:21:30.812 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:31.074 [2024-05-16 09:34:24.382177] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:31.074 request: 00:21:31.074 { 00:21:31.074 "uuid": "2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8", 00:21:31.074 "method": "bdev_lvol_get_lvstores", 00:21:31.074 "req_id": 1 00:21:31.074 } 00:21:31.074 Got JSON-RPC error response 00:21:31.074 response: 00:21:31.074 { 00:21:31.074 "code": -19, 00:21:31.074 "message": "No such device" 00:21:31.074 } 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.074 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:31.334 aio_bdev 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2d5816d-c568-40af-9413-7adff5793335 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=c2d5816d-c568-40af-9413-7adff5793335 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:31.334 09:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2d5816d-c568-40af-9413-7adff5793335 -t 2000 00:21:31.596 [ 00:21:31.596 { 00:21:31.596 "name": "c2d5816d-c568-40af-9413-7adff5793335", 00:21:31.596 "aliases": [ 00:21:31.596 "lvs/lvol" 00:21:31.596 ], 00:21:31.596 "product_name": "Logical Volume", 00:21:31.596 "block_size": 4096, 00:21:31.596 "num_blocks": 38912, 00:21:31.596 "uuid": "c2d5816d-c568-40af-9413-7adff5793335", 00:21:31.596 "assigned_rate_limits": { 00:21:31.596 "rw_ios_per_sec": 0, 00:21:31.596 "rw_mbytes_per_sec": 0, 00:21:31.596 "r_mbytes_per_sec": 0, 00:21:31.596 "w_mbytes_per_sec": 0 00:21:31.596 }, 00:21:31.596 "claimed": false, 00:21:31.596 "zoned": false, 00:21:31.596 "supported_io_types": { 00:21:31.596 "read": true, 00:21:31.596 "write": true, 00:21:31.596 "unmap": true, 00:21:31.596 "write_zeroes": true, 00:21:31.596 "flush": false, 00:21:31.596 "reset": true, 00:21:31.596 "compare": false, 00:21:31.596 "compare_and_write": false, 00:21:31.596 "abort": false, 00:21:31.596 "nvme_admin": false, 00:21:31.596 "nvme_io": false 00:21:31.596 }, 00:21:31.596 "driver_specific": { 00:21:31.596 "lvol": { 00:21:31.596 "lvol_store_uuid": "2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8", 00:21:31.596 "base_bdev": "aio_bdev", 
00:21:31.596 "thin_provision": false, 00:21:31.596 "num_allocated_clusters": 38, 00:21:31.596 "snapshot": false, 00:21:31.596 "clone": false, 00:21:31.596 "esnap_clone": false 00:21:31.596 } 00:21:31.596 } 00:21:31.596 } 00:21:31.596 ] 00:21:31.596 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:21:31.596 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:31.596 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:21:31.857 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:21:31.857 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:31.857 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:21:31.857 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:21:31.857 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2d5816d-c568-40af-9413-7adff5793335 00:21:32.117 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2fb1a6cd-ab26-42c7-9dc5-999be5ffd1f8 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:32.378 00:21:32.378 real 0m15.205s 00:21:32.378 user 0m14.890s 00:21:32.378 sys 0m1.292s 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:21:32.378 ************************************ 00:21:32.378 END TEST lvs_grow_clean 00:21:32.378 ************************************ 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:32.378 09:34:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:32.638 ************************************ 00:21:32.639 START TEST lvs_grow_dirty 00:21:32.639 ************************************ 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:32.639 09:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:32.639 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:21:32.639 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:21:32.899 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:32.899 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:32.899 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:21:32.899 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:21:32.899 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:21:32.899 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc lvol 150 00:21:33.160 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=614258ec-7342-4bb5-899a-9d68d6da1803 00:21:33.160 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:33.160 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:33.160 [2024-05-16 09:34:26.712044] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:33.160 [2024-05-16 09:34:26.712101] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:33.160 true 00:21:33.420 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:33.420 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:21:33.420 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:33.420 09:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:33.681 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 614258ec-7342-4bb5-899a-9d68d6da1803 00:21:33.681 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:33.941 [2024-05-16 09:34:27.333917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=291878 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 291878 /var/tmp/bdevperf.sock 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 291878 ']' 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.941 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:33.942 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.942 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:33.942 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:34.203 09:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:34.203 [2024-05-16 09:34:27.547948] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
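The initiator side is the same in both the clean and dirty runs: bdevperf is started with -z so it idles until driven over its own RPC socket, an NVMe-oF controller is attached across the TCP listener created earlier, and bdevperf.py then kicks off the workload. A condensed sketch of those three steps, again using $SPDK_DIR for the checkout path (in the trace they are wrapped in waitforlisten/waitforbdev helpers and an error trap):

# 10 s of 4 KiB random writes at queue depth 128, reported once per second (-S 1)
$SPDK_DIR/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
# connect to the target; the exported lvol namespace shows up as bdev Nvme0n1
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# start the I/O; this is what produces the per-second tables below
$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests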
00:21:34.203 [2024-05-16 09:34:27.547998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291878 ] 00:21:34.203 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.203 [2024-05-16 09:34:27.621102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.203 [2024-05-16 09:34:27.674859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.775 09:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:34.775 09:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:21:34.775 09:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:35.348 Nvme0n1 00:21:35.348 09:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:35.349 [ 00:21:35.349 { 00:21:35.349 "name": "Nvme0n1", 00:21:35.349 "aliases": [ 00:21:35.349 "614258ec-7342-4bb5-899a-9d68d6da1803" 00:21:35.349 ], 00:21:35.349 "product_name": "NVMe disk", 00:21:35.349 "block_size": 4096, 00:21:35.349 "num_blocks": 38912, 00:21:35.349 "uuid": "614258ec-7342-4bb5-899a-9d68d6da1803", 00:21:35.349 "assigned_rate_limits": { 00:21:35.349 "rw_ios_per_sec": 0, 00:21:35.349 "rw_mbytes_per_sec": 0, 00:21:35.349 "r_mbytes_per_sec": 0, 00:21:35.349 "w_mbytes_per_sec": 0 00:21:35.349 }, 00:21:35.349 "claimed": false, 00:21:35.349 "zoned": false, 00:21:35.349 "supported_io_types": { 00:21:35.349 "read": true, 00:21:35.349 "write": true, 00:21:35.349 "unmap": true, 00:21:35.349 "write_zeroes": true, 00:21:35.349 "flush": true, 00:21:35.349 "reset": true, 00:21:35.349 "compare": true, 00:21:35.349 "compare_and_write": true, 00:21:35.349 "abort": true, 00:21:35.349 "nvme_admin": true, 00:21:35.349 "nvme_io": true 00:21:35.349 }, 00:21:35.349 "memory_domains": [ 00:21:35.349 { 00:21:35.349 "dma_device_id": "system", 00:21:35.349 "dma_device_type": 1 00:21:35.349 } 00:21:35.349 ], 00:21:35.349 "driver_specific": { 00:21:35.349 "nvme": [ 00:21:35.349 { 00:21:35.349 "trid": { 00:21:35.349 "trtype": "TCP", 00:21:35.349 "adrfam": "IPv4", 00:21:35.349 "traddr": "10.0.0.2", 00:21:35.349 "trsvcid": "4420", 00:21:35.349 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:35.349 }, 00:21:35.349 "ctrlr_data": { 00:21:35.349 "cntlid": 1, 00:21:35.349 "vendor_id": "0x8086", 00:21:35.349 "model_number": "SPDK bdev Controller", 00:21:35.349 "serial_number": "SPDK0", 00:21:35.349 "firmware_revision": "24.05", 00:21:35.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.349 "oacs": { 00:21:35.349 "security": 0, 00:21:35.349 "format": 0, 00:21:35.349 "firmware": 0, 00:21:35.349 "ns_manage": 0 00:21:35.349 }, 00:21:35.349 "multi_ctrlr": true, 00:21:35.349 "ana_reporting": false 00:21:35.349 }, 00:21:35.349 "vs": { 00:21:35.349 "nvme_version": "1.3" 00:21:35.349 }, 00:21:35.349 "ns_data": { 00:21:35.349 "id": 1, 00:21:35.349 "can_share": true 00:21:35.349 } 00:21:35.349 } 00:21:35.349 ], 00:21:35.349 "mp_policy": "active_passive" 00:21:35.349 } 00:21:35.349 } 00:21:35.349 ] 00:21:35.349 09:34:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=292082 00:21:35.349 09:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:35.349 09:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.610 Running I/O for 10 seconds... 00:21:36.553 Latency(us) 00:21:36.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:36.553 Nvme0n1 : 1.00 18028.00 70.42 0.00 0.00 0.00 0.00 0.00 00:21:36.553 =================================================================================================================== 00:21:36.553 Total : 18028.00 70.42 0.00 0.00 0.00 0.00 0.00 00:21:36.553 00:21:37.498 09:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:37.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:37.498 Nvme0n1 : 2.00 18192.00 71.06 0.00 0.00 0.00 0.00 0.00 00:21:37.498 =================================================================================================================== 00:21:37.498 Total : 18192.00 71.06 0.00 0.00 0.00 0.00 0.00 00:21:37.498 00:21:37.498 true 00:21:37.498 09:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:37.498 09:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:37.759 09:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:37.759 09:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:37.759 09:34:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 292082 00:21:38.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:38.703 Nvme0n1 : 3.00 18256.67 71.32 0.00 0.00 0.00 0.00 0.00 00:21:38.703 =================================================================================================================== 00:21:38.703 Total : 18256.67 71.32 0.00 0.00 0.00 0.00 0.00 00:21:38.703 00:21:39.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:39.647 Nvme0n1 : 4.00 18301.50 71.49 0.00 0.00 0.00 0.00 0.00 00:21:39.648 =================================================================================================================== 00:21:39.648 Total : 18301.50 71.49 0.00 0.00 0.00 0.00 0.00 00:21:39.648 00:21:40.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:40.592 Nvme0n1 : 5.00 18336.60 71.63 0.00 0.00 0.00 0.00 0.00 00:21:40.593 =================================================================================================================== 00:21:40.593 Total : 18336.60 71.63 0.00 0.00 0.00 0.00 0.00 00:21:40.593 00:21:41.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:41.539 Nvme0n1 : 6.00 18356.83 71.71 0.00 0.00 0.00 0.00 0.00 00:21:41.539 
=================================================================================================================== 00:21:41.539 Total : 18356.83 71.71 0.00 0.00 0.00 0.00 0.00 00:21:41.539 00:21:42.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:42.482 Nvme0n1 : 7.00 18371.14 71.76 0.00 0.00 0.00 0.00 0.00 00:21:42.482 =================================================================================================================== 00:21:42.482 Total : 18371.14 71.76 0.00 0.00 0.00 0.00 0.00 00:21:42.482 00:21:43.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:43.427 Nvme0n1 : 8.00 18383.62 71.81 0.00 0.00 0.00 0.00 0.00 00:21:43.427 =================================================================================================================== 00:21:43.427 Total : 18383.62 71.81 0.00 0.00 0.00 0.00 0.00 00:21:43.427 00:21:44.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:44.812 Nvme0n1 : 9.00 18394.22 71.85 0.00 0.00 0.00 0.00 0.00 00:21:44.812 =================================================================================================================== 00:21:44.812 Total : 18394.22 71.85 0.00 0.00 0.00 0.00 0.00 00:21:44.812 00:21:45.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:45.755 Nvme0n1 : 10.00 18401.40 71.88 0.00 0.00 0.00 0.00 0.00 00:21:45.755 =================================================================================================================== 00:21:45.755 Total : 18401.40 71.88 0.00 0.00 0.00 0.00 0.00 00:21:45.755 00:21:45.755 00:21:45.755 Latency(us) 00:21:45.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:45.755 Nvme0n1 : 10.00 18405.35 71.90 0.00 0.00 6951.56 4232.53 17039.36 00:21:45.755 =================================================================================================================== 00:21:45.755 Total : 18405.35 71.90 0.00 0.00 6951.56 4232.53 17039.36 00:21:45.755 0 00:21:45.755 09:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 291878 00:21:45.755 09:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 291878 ']' 00:21:45.755 09:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 291878 00:21:45.755 09:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:21:45.755 09:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:45.755 09:34:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 291878 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 291878' 00:21:45.755 killing process with pid 291878 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 291878 00:21:45.755 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.755 00:21:45.755 Latency(us) 00:21:45.755 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:45.755 =================================================================================================================== 00:21:45.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 291878 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:45.755 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:46.016 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:46.016 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:21:46.276 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:21:46.276 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:21:46.276 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 288116 00:21:46.276 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 288116 00:21:46.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 288116 Killed "${NVMF_APP[@]}" "$@" 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=294241 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 294241 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 294241 ']' 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
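What distinguishes the dirty variant is visible right above: the first nvmf_tgt (pid 288116) is killed with SIGKILL after the lvstore has been grown but before it is ever cleanly closed, and a fresh target (pid 294241) is started in its place. The lvol metadata therefore has to be recovered from the AIO file when the new target re-opens it, and the same consistency check applies on both sides of the restart: 99 data clusters in the grown lvstore minus the 38 allocated to the lvol leaves the 61 free clusters the script just read back. A rough sketch of the restart, with $SPDK_DIR as before and the pids taken from this run:

# kill the target hard so the lvstore is never cleanly shut down
kill -9 "$nvmfpid"          # 288116 in this run
# bring up a replacement target in the same namespace
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# re-registering the same backing file makes the examine path replay the blobstore metadata;
# that is where the "Performing recovery on blobstore" notices below come from
$SPDK_DIR/scripts/rpc.py bdev_aio_create $SPDK_DIR/test/nvmf/target/aio_bdev aio_bdev 4096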
00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:46.277 09:34:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:46.277 [2024-05-16 09:34:39.744994] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:21:46.277 [2024-05-16 09:34:39.745048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.277 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.277 [2024-05-16 09:34:39.810770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.537 [2024-05-16 09:34:39.875956] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.537 [2024-05-16 09:34:39.875991] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.537 [2024-05-16 09:34:39.875999] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.537 [2024-05-16 09:34:39.876005] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.537 [2024-05-16 09:34:39.876010] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.537 [2024-05-16 09:34:39.876027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.109 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:47.371 [2024-05-16 09:34:40.676623] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:47.371 [2024-05-16 09:34:40.676715] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:47.371 [2024-05-16 09:34:40.676745] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 614258ec-7342-4bb5-899a-9d68d6da1803 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=614258ec-7342-4bb5-899a-9d68d6da1803 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:47.371 09:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 614258ec-7342-4bb5-899a-9d68d6da1803 -t 2000 00:21:47.632 [ 00:21:47.632 { 00:21:47.632 "name": "614258ec-7342-4bb5-899a-9d68d6da1803", 00:21:47.632 "aliases": [ 00:21:47.632 "lvs/lvol" 00:21:47.632 ], 00:21:47.632 "product_name": "Logical Volume", 00:21:47.632 "block_size": 4096, 00:21:47.632 "num_blocks": 38912, 00:21:47.632 "uuid": "614258ec-7342-4bb5-899a-9d68d6da1803", 00:21:47.632 "assigned_rate_limits": { 00:21:47.632 "rw_ios_per_sec": 0, 00:21:47.632 "rw_mbytes_per_sec": 0, 00:21:47.632 "r_mbytes_per_sec": 0, 00:21:47.632 "w_mbytes_per_sec": 0 00:21:47.632 }, 00:21:47.632 "claimed": false, 00:21:47.632 "zoned": false, 00:21:47.632 "supported_io_types": { 00:21:47.632 "read": true, 00:21:47.632 "write": true, 00:21:47.632 "unmap": true, 00:21:47.632 "write_zeroes": true, 00:21:47.632 "flush": false, 00:21:47.632 "reset": true, 00:21:47.632 "compare": false, 00:21:47.632 "compare_and_write": false, 00:21:47.632 "abort": false, 00:21:47.632 "nvme_admin": false, 00:21:47.632 "nvme_io": false 00:21:47.632 }, 00:21:47.632 "driver_specific": { 00:21:47.632 "lvol": { 00:21:47.632 "lvol_store_uuid": "14e1571e-bb54-4ee1-ae2d-014b810ad6dc", 00:21:47.632 "base_bdev": "aio_bdev", 00:21:47.632 "thin_provision": false, 00:21:47.632 "num_allocated_clusters": 38, 00:21:47.632 "snapshot": false, 00:21:47.632 "clone": false, 00:21:47.632 "esnap_clone": false 00:21:47.632 } 00:21:47.632 } 00:21:47.632 } 00:21:47.632 ] 00:21:47.632 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:21:47.632 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:47.632 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:21:47.632 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:21:47.632 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:47.632 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:21:47.893 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:21:47.893 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:47.893 [2024-05-16 09:34:41.424535] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:48.155 request: 00:21:48.155 { 00:21:48.155 "uuid": "14e1571e-bb54-4ee1-ae2d-014b810ad6dc", 00:21:48.155 "method": "bdev_lvol_get_lvstores", 00:21:48.155 "req_id": 1 00:21:48.155 } 00:21:48.155 Got JSON-RPC error response 00:21:48.155 response: 00:21:48.155 { 00:21:48.155 "code": -19, 00:21:48.155 "message": "No such device" 00:21:48.155 } 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:48.155 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:48.416 aio_bdev 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 614258ec-7342-4bb5-899a-9d68d6da1803 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=614258ec-7342-4bb5-899a-9d68d6da1803 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
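The sequence just traced is the negative check of the same idea: once the backing AIO bdev is deleted, the lvstore is closed and a lookup by UUID is expected to fail with -19 (the JSON-RPC "No such device" response above); the NOT wrapper simply inverts that exit status. Re-creating the AIO bdev from the same file then lets examine bring the lvstore and lvol back, which is what the waitforbdev loop continuing below is waiting for. An equivalent assertion written out directly, with the if standing in for the NOT helper:

# with the base bdev gone, the lvstore must not be reachable
$SPDK_DIR/scripts/rpc.py bdev_aio_delete aio_bdev
if $SPDK_DIR/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc; then
    echo "lvstore still visible without its base bdev" >&2
    exit 1
fi
# re-register the same file and wait for examine to rediscover lvs/lvol
$SPDK_DIR/scripts/rpc.py bdev_aio_create $SPDK_DIR/test/nvmf/target/aio_bdev aio_bdev 4096
$SPDK_DIR/scripts/rpc.py bdev_wait_for_examine
$SPDK_DIR/scripts/rpc.py bdev_get_bdevs -b 614258ec-7342-4bb5-899a-9d68d6da1803 -t 2000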
00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:48.416 09:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 614258ec-7342-4bb5-899a-9d68d6da1803 -t 2000 00:21:48.678 [ 00:21:48.678 { 00:21:48.678 "name": "614258ec-7342-4bb5-899a-9d68d6da1803", 00:21:48.678 "aliases": [ 00:21:48.678 "lvs/lvol" 00:21:48.678 ], 00:21:48.678 "product_name": "Logical Volume", 00:21:48.678 "block_size": 4096, 00:21:48.678 "num_blocks": 38912, 00:21:48.678 "uuid": "614258ec-7342-4bb5-899a-9d68d6da1803", 00:21:48.678 "assigned_rate_limits": { 00:21:48.678 "rw_ios_per_sec": 0, 00:21:48.678 "rw_mbytes_per_sec": 0, 00:21:48.678 "r_mbytes_per_sec": 0, 00:21:48.678 "w_mbytes_per_sec": 0 00:21:48.678 }, 00:21:48.678 "claimed": false, 00:21:48.678 "zoned": false, 00:21:48.678 "supported_io_types": { 00:21:48.678 "read": true, 00:21:48.678 "write": true, 00:21:48.678 "unmap": true, 00:21:48.678 "write_zeroes": true, 00:21:48.678 "flush": false, 00:21:48.678 "reset": true, 00:21:48.678 "compare": false, 00:21:48.678 "compare_and_write": false, 00:21:48.678 "abort": false, 00:21:48.678 "nvme_admin": false, 00:21:48.678 "nvme_io": false 00:21:48.678 }, 00:21:48.678 "driver_specific": { 00:21:48.678 "lvol": { 00:21:48.678 "lvol_store_uuid": "14e1571e-bb54-4ee1-ae2d-014b810ad6dc", 00:21:48.678 "base_bdev": "aio_bdev", 00:21:48.678 "thin_provision": false, 00:21:48.678 "num_allocated_clusters": 38, 00:21:48.678 "snapshot": false, 00:21:48.678 "clone": false, 00:21:48.678 "esnap_clone": false 00:21:48.678 } 00:21:48.678 } 00:21:48.678 } 00:21:48.678 ] 00:21:48.678 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:21:48.678 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:48.678 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:21:48.678 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:21:48.678 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:48.678 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:21:48.939 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:21:48.939 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 614258ec-7342-4bb5-899a-9d68d6da1803 00:21:49.201 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14e1571e-bb54-4ee1-ae2d-014b810ad6dc 00:21:49.201 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:21:49.462 00:21:49.462 real 0m16.915s 00:21:49.462 user 0m44.534s 00:21:49.462 sys 0m2.704s 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:49.462 ************************************ 00:21:49.462 END TEST lvs_grow_dirty 00:21:49.462 ************************************ 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:49.462 nvmf_trace.0 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.462 rmmod nvme_tcp 00:21:49.462 rmmod nvme_fabrics 00:21:49.462 rmmod nvme_keyring 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 294241 ']' 00:21:49.462 09:34:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 294241 00:21:49.462 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 294241 ']' 00:21:49.462 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 294241 00:21:49.462 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:21:49.462 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:49.462 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 294241 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 294241' 00:21:49.723 killing process with pid 294241 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 294241 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 294241 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.723 09:34:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.272 09:34:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.272 00:21:52.272 real 0m42.777s 00:21:52.272 user 1m5.234s 00:21:52.272 sys 0m9.561s 00:21:52.272 09:34:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:52.272 09:34:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:52.272 ************************************ 00:21:52.272 END TEST nvmf_lvs_grow 00:21:52.272 ************************************ 00:21:52.272 09:34:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:52.272 09:34:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:52.272 09:34:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:52.272 09:34:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.272 ************************************ 00:21:52.272 START TEST nvmf_bdev_io_wait 00:21:52.272 ************************************ 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:52.272 * Looking for test storage... 
00:21:52.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.272 09:34:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:58.860 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:58.860 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:58.860 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:58.860 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.860 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:59.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:21:59.122 00:21:59.122 --- 10.0.0.2 ping statistics --- 00:21:59.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.122 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:21:59.122 00:21:59.122 --- 10.0.0.1 ping statistics --- 00:21:59.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.122 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=298974 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 298974 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 298974 ']' 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:59.122 09:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:59.122 [2024-05-16 09:34:52.613724] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
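For reference (the DPDK EAL parameter line for this launch follows below): the target is started with --wait-for-rpc so that bdev options can be applied before subsystem initialization completes. A minimal sketch of that startup order, using the binary and rpc.py paths from this run; the RPC variable is only shorthand for the sketch:

```bash
# Start the target inside the test namespace, paused until explicit RPC-driven init.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# (the harness waits for the RPC socket via waitforlisten before issuing these)
$RPC bdev_set_options -p 5 -c 1               # must land before framework_start_init
$RPC framework_start_init                     # finish subsystem initialization
$RPC nvmf_create_transport -t tcp -o -u 8192  # TCP transport, as traced below
```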
00:21:59.122 [2024-05-16 09:34:52.613790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.122 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.382 [2024-05-16 09:34:52.684376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.382 [2024-05-16 09:34:52.761625] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.382 [2024-05-16 09:34:52.761666] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.382 [2024-05-16 09:34:52.761677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.382 [2024-05-16 09:34:52.761683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.382 [2024-05-16 09:34:52.761689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.382 [2024-05-16 09:34:52.761838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.382 [2024-05-16 09:34:52.761943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.382 [2024-05-16 09:34:52.762099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.382 [2024-05-16 09:34:52.762098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.954 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.954 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:21:59.954 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.954 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.954 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:59.954 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:59.955 [2024-05-16 09:34:53.502182] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.955 09:34:53 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.955 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 Malloc0 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:00.216 [2024-05-16 09:34:53.571111] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:00.216 [2024-05-16 09:34:53.571342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=299324 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=299326 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:00.216 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.217 { 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme$subsystem", 00:22:00.217 "trtype": "$TEST_TRANSPORT", 00:22:00.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "$NVMF_PORT", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.217 "hdgst": ${hdgst:-false}, 00:22:00.217 "ddgst": ${ddgst:-false} 00:22:00.217 }, 00:22:00.217 "method": 
"bdev_nvme_attach_controller" 00:22:00.217 } 00:22:00.217 EOF 00:22:00.217 )") 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=299328 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.217 { 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme$subsystem", 00:22:00.217 "trtype": "$TEST_TRANSPORT", 00:22:00.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "$NVMF_PORT", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.217 "hdgst": ${hdgst:-false}, 00:22:00.217 "ddgst": ${ddgst:-false} 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 } 00:22:00.217 EOF 00:22:00.217 )") 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=299331 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.217 { 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme$subsystem", 00:22:00.217 "trtype": "$TEST_TRANSPORT", 00:22:00.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "$NVMF_PORT", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.217 "hdgst": ${hdgst:-false}, 00:22:00.217 "ddgst": ${ddgst:-false} 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 } 00:22:00.217 EOF 00:22:00.217 )") 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.217 { 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme$subsystem", 00:22:00.217 "trtype": "$TEST_TRANSPORT", 00:22:00.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "$NVMF_PORT", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.217 "hdgst": ${hdgst:-false}, 00:22:00.217 "ddgst": ${ddgst:-false} 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 } 00:22:00.217 EOF 00:22:00.217 )") 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 299324 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme1", 00:22:00.217 "trtype": "tcp", 00:22:00.217 "traddr": "10.0.0.2", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "4420", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.217 "hdgst": false, 00:22:00.217 "ddgst": false 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 }' 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
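For reference (generation of the identical config for the remaining jobs continues below): with the malloc-backed subsystem and listener in place, the four bdevperf instances above differ only in core mask, instance id and workload. A condensed sketch using the commands and flags from this run; $CONFIG stands in for the gen_nvmf_target_json output printed here and is not a variable the harness itself uses:

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Target side: 64 MB malloc bdev (512-byte blocks) exported over NVMe/TCP on 10.0.0.2:4420.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: one bdevperf instance per workload, each fed the attach-controller
# config on /dev/fd/63 ($CONFIG = the JSON emitted by gen_nvmf_target_json above).
$BPERF -m 0x10 -i 1 --json <(echo "$CONFIG") -q 128 -o 4096 -w write -t 1 -s 256 &
$BPERF -m 0x20 -i 2 --json <(echo "$CONFIG") -q 128 -o 4096 -w read  -t 1 -s 256 &
$BPERF -m 0x40 -i 3 --json <(echo "$CONFIG") -q 128 -o 4096 -w flush -t 1 -s 256 &
$BPERF -m 0x80 -i 4 --json <(echo "$CONFIG") -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait
```

Each job runs for one second against the same Nvme1n1 namespace, which is what produces the four "Running I/O for 1 seconds..." lines and the four latency tables further down in the trace.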
00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme1", 00:22:00.217 "trtype": "tcp", 00:22:00.217 "traddr": "10.0.0.2", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "4420", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.217 "hdgst": false, 00:22:00.217 "ddgst": false 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 }' 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme1", 00:22:00.217 "trtype": "tcp", 00:22:00.217 "traddr": "10.0.0.2", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "4420", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.217 "hdgst": false, 00:22:00.217 "ddgst": false 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 }' 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:22:00.217 09:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:00.217 "params": { 00:22:00.217 "name": "Nvme1", 00:22:00.217 "trtype": "tcp", 00:22:00.217 "traddr": "10.0.0.2", 00:22:00.217 "adrfam": "ipv4", 00:22:00.217 "trsvcid": "4420", 00:22:00.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.217 "hdgst": false, 00:22:00.217 "ddgst": false 00:22:00.217 }, 00:22:00.217 "method": "bdev_nvme_attach_controller" 00:22:00.217 }' 00:22:00.217 [2024-05-16 09:34:53.623717] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:00.217 [2024-05-16 09:34:53.623767] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:00.217 [2024-05-16 09:34:53.623954] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:00.217 [2024-05-16 09:34:53.623998] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:00.217 [2024-05-16 09:34:53.625912] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:00.217 [2024-05-16 09:34:53.625961] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:00.217 [2024-05-16 09:34:53.628225] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:22:00.217 [2024-05-16 09:34:53.628271] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:00.217 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.217 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.217 [2024-05-16 09:34:53.771224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.479 [2024-05-16 09:34:53.821940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:00.479 [2024-05-16 09:34:53.832139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.479 [2024-05-16 09:34:53.861308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.479 [2024-05-16 09:34:53.884676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.479 [2024-05-16 09:34:53.911993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:00.479 [2024-05-16 09:34:53.947110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.479 [2024-05-16 09:34:53.995163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:00.479 Running I/O for 1 seconds... 00:22:00.739 Running I/O for 1 seconds... 00:22:00.739 Running I/O for 1 seconds... 00:22:00.739 Running I/O for 1 seconds... 00:22:01.709 00:22:01.709 Latency(us) 00:22:01.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.709 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:01.709 Nvme1n1 : 1.01 8436.54 32.96 0.00 0.00 15055.87 5816.32 24466.77 00:22:01.709 =================================================================================================================== 00:22:01.709 Total : 8436.54 32.96 0.00 0.00 15055.87 5816.32 24466.77 00:22:01.709 00:22:01.709 Latency(us) 00:22:01.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.709 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:01.709 Nvme1n1 : 1.01 7978.27 31.17 0.00 0.00 15991.08 5925.55 35170.99 00:22:01.709 =================================================================================================================== 00:22:01.709 Total : 7978.27 31.17 0.00 0.00 15991.08 5925.55 35170.99 00:22:01.709 00:22:01.709 Latency(us) 00:22:01.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.709 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:01.709 Nvme1n1 : 1.00 188141.59 734.93 0.00 0.00 677.10 271.36 754.35 00:22:01.709 =================================================================================================================== 00:22:01.709 Total : 188141.59 734.93 0.00 0.00 677.10 271.36 754.35 00:22:01.709 00:22:01.709 Latency(us) 00:22:01.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.709 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:01.709 Nvme1n1 : 1.00 14459.30 56.48 0.00 0.00 8828.20 4696.75 20753.07 00:22:01.709 =================================================================================================================== 00:22:01.709 Total : 14459.30 56.48 0.00 0.00 8828.20 4696.75 20753.07 00:22:01.709 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 
299326 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 299328 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 299331 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.969 rmmod nvme_tcp 00:22:01.969 rmmod nvme_fabrics 00:22:01.969 rmmod nvme_keyring 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 298974 ']' 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 298974 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 298974 ']' 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 298974 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 298974 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 298974' 00:22:01.969 killing process with pid 298974 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 298974 00:22:01.969 [2024-05-16 09:34:55.496407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:01.969 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 298974 00:22:02.229 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.230 09:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.144 09:34:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.144 00:22:04.144 real 0m12.356s 00:22:04.144 user 0m18.949s 00:22:04.144 sys 0m6.544s 00:22:04.144 09:34:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:04.144 09:34:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:22:04.144 ************************************ 00:22:04.144 END TEST nvmf_bdev_io_wait 00:22:04.144 ************************************ 00:22:04.407 09:34:57 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:04.407 09:34:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:04.407 09:34:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:04.407 09:34:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:04.407 ************************************ 00:22:04.407 START TEST nvmf_queue_depth 00:22:04.407 ************************************ 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:04.407 * Looking for test storage... 
00:22:04.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.407 09:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.002 
09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:11.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:11.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:11.002 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:11.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.002 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.003 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:11.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:22:11.265 00:22:11.265 --- 10.0.0.2 ping statistics --- 00:22:11.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.265 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:22:11.265 00:22:11.265 --- 10.0.0.1 ping statistics --- 00:22:11.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.265 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=303683 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 303683 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 303683 ']' 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:11.265 09:35:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:11.265 [2024-05-16 09:35:04.706683] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
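At this point nvmftestinit has finished wiring the loopback topology used by the TCP tests: one e810 port (cvl_0_0) is moved into a private network namespace and carries the target address, its sibling port (cvl_0_1) stays in the root namespace as the initiator, and a single iptables rule opens TCP port 4420. A minimal sketch of that setup, using only the interface names, namespace name, and addresses already visible in the trace above (an illustration, not a substitute for nvmf/common.sh):
# target port lives in its own namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
The nvmf_tgt process started next is launched with "ip netns exec cvl_0_0_ns_spdk" prepended (NVMF_TARGET_NS_CMD), which is why it serves 10.0.0.2 from inside the namespace.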
00:22:11.265 [2024-05-16 09:35:04.706748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.265 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.265 [2024-05-16 09:35:04.795032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.528 [2024-05-16 09:35:04.887069] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.528 [2024-05-16 09:35:04.887126] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.528 [2024-05-16 09:35:04.887135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.528 [2024-05-16 09:35:04.887143] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.528 [2024-05-16 09:35:04.887149] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.528 [2024-05-16 09:35:04.887182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.101 [2024-05-16 09:35:05.537708] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.101 Malloc0 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.101 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.102 09:35:05 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.102 [2024-05-16 09:35:05.611653] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:12.102 [2024-05-16 09:35:05.611934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=303969 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 303969 /var/tmp/bdevperf.sock 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 303969 ']' 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.102 09:35:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:12.364 [2024-05-16 09:35:05.666435] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
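Condensed from the xtrace around this point, the queue-depth test is driven entirely over JSON-RPC: the target side gets a TCP transport, a 64 MB malloc bdev, and a subsystem listening on 10.0.0.2:4420, then bdevperf is started in wait mode (-z) at queue depth 1024 and has its controller attached through a second RPC socket. A rough shell equivalent of that sequence, assuming rpc.py defaults to the target's /var/tmp/spdk.sock and paths are taken relative to the SPDK workspace used in this run:
# target side (nvmf_tgt already running inside cvl_0_0_ns_spdk)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf waits (-z) on its own RPC socket until a bdev is attached
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The 10-second verify run reported below sustains roughly 11.5k IOPS at queue depth 1024 against the malloc-backed namespace.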
00:22:12.364 [2024-05-16 09:35:05.666496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303969 ] 00:22:12.364 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.364 [2024-05-16 09:35:05.730309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.364 [2024-05-16 09:35:05.804237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.937 09:35:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:12.937 09:35:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:22:12.937 09:35:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:12.937 09:35:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.937 09:35:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:13.199 NVMe0n1 00:22:13.199 09:35:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.199 09:35:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.199 Running I/O for 10 seconds... 00:22:23.205 00:22:23.205 Latency(us) 00:22:23.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.205 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:22:23.205 Verification LBA range: start 0x0 length 0x4000 00:22:23.205 NVMe0n1 : 10.08 11482.09 44.85 0.00 0.00 88862.94 25231.36 75147.95 00:22:23.205 =================================================================================================================== 00:22:23.205 Total : 11482.09 44.85 0.00 0.00 88862.94 25231.36 75147.95 00:22:23.205 0 00:22:23.205 09:35:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 303969 00:22:23.205 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 303969 ']' 00:22:23.205 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 303969 00:22:23.205 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:22:23.205 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:23.205 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303969 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303969' 00:22:23.465 killing process with pid 303969 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 303969 00:22:23.465 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.465 00:22:23.465 Latency(us) 00:22:23.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.465 =================================================================================================================== 00:22:23.465 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 303969 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.465 rmmod nvme_tcp 00:22:23.465 rmmod nvme_fabrics 00:22:23.465 rmmod nvme_keyring 00:22:23.465 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 303683 ']' 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 303683 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 303683 ']' 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 303683 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:23.466 09:35:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 303683 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 303683' 00:22:23.727 killing process with pid 303683 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 303683 00:22:23.727 [2024-05-16 09:35:17.027416] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 303683 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.727 09:35:17 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.274 09:35:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.274 00:22:26.274 real 0m21.467s 00:22:26.274 user 0m25.320s 00:22:26.274 sys 0m6.140s 00:22:26.274 09:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:26.274 09:35:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:22:26.274 ************************************ 00:22:26.274 END TEST nvmf_queue_depth 00:22:26.274 ************************************ 00:22:26.274 09:35:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:26.274 09:35:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:26.274 09:35:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:26.274 09:35:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.274 ************************************ 00:22:26.274 START TEST nvmf_target_multipath 00:22:26.274 ************************************ 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:22:26.274 * Looking for test storage... 00:22:26.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.274 09:35:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.275 09:35:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:22:32.865 09:35:26 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:22:32.865 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:32.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:32.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.866 09:35:26 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:32.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:32.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:32.866 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:22:33.136 00:22:33.136 --- 10.0.0.2 ping statistics --- 00:22:33.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.136 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:22:33.136 00:22:33.136 --- 10.0.0.1 ping statistics --- 00:22:33.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.136 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:22:33.136 only one NIC for nvmf test 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.136 rmmod nvme_tcp 00:22:33.136 rmmod nvme_fabrics 00:22:33.136 rmmod nvme_keyring 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.136 09:35:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.679 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.680 00:22:35.680 real 0m9.417s 00:22:35.680 user 0m2.022s 00:22:35.680 sys 0m5.295s 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:35.680 09:35:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 ************************************ 00:22:35.680 END TEST nvmf_target_multipath 00:22:35.680 ************************************ 00:22:35.680 09:35:28 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:35.680 09:35:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:35.680 09:35:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:35.680 09:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.680 ************************************ 00:22:35.680 START TEST nvmf_zcopy 00:22:35.680 ************************************ 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:35.680 * Looking for test storage... 
00:22:35.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.680 09:35:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:42.270 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.270 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:22:42.270 09:35:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:42.270 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:42.270 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:42.270 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.271 
09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.271 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:42.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:22:42.271 00:22:42.271 --- 10.0.0.2 ping statistics --- 00:22:42.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.271 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:22:42.271 00:22:42.271 --- 10.0.0.1 ping statistics --- 00:22:42.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.271 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.271 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=314363 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 314363 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 314363 ']' 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:42.532 09:35:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:42.532 [2024-05-16 09:35:35.922936] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:42.532 [2024-05-16 09:35:35.922984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.532 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.532 [2024-05-16 09:35:36.002698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.532 [2024-05-16 09:35:36.065416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.532 [2024-05-16 09:35:36.065452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
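The address setup that nvmftestinit traced above, before the nvmf_tgt startup notices begin, uses the two physical E810 ports detected earlier (cvl_0_0 and cvl_0_1) rather than veth pairs: the first port is moved into a private network namespace and becomes the target side, the second stays in the root namespace as the initiator. Condensed from the trace, with no commands or addresses beyond what it already shows:

    # Physical two-port NVMe/TCP test topology (nvmf_tcp_init), as traced above
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first CVL port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let the NVMe/TCP listener port through
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check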
00:22:42.532 [2024-05-16 09:35:36.065459] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.532 [2024-05-16 09:35:36.065466] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.532 [2024-05-16 09:35:36.065472] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.532 [2024-05-16 09:35:36.065490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.476 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:43.476 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:22:43.476 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.476 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.476 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.476 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.477 [2024-05-16 09:35:36.748078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.477 [2024-05-16 09:35:36.764047] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:43.477 [2024-05-16 09:35:36.764356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.477 malloc0 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.477 { 00:22:43.477 "params": { 00:22:43.477 "name": "Nvme$subsystem", 00:22:43.477 "trtype": "$TEST_TRANSPORT", 00:22:43.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.477 "adrfam": "ipv4", 00:22:43.477 "trsvcid": "$NVMF_PORT", 00:22:43.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.477 "hdgst": ${hdgst:-false}, 00:22:43.477 "ddgst": ${ddgst:-false} 00:22:43.477 }, 00:22:43.477 "method": "bdev_nvme_attach_controller" 00:22:43.477 } 00:22:43.477 EOF 00:22:43.477 )") 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:22:43.477 09:35:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:43.477 "params": { 00:22:43.477 "name": "Nvme1", 00:22:43.477 "trtype": "tcp", 00:22:43.477 "traddr": "10.0.0.2", 00:22:43.477 "adrfam": "ipv4", 00:22:43.477 "trsvcid": "4420", 00:22:43.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.477 "hdgst": false, 00:22:43.477 "ddgst": false 00:22:43.477 }, 00:22:43.477 "method": "bdev_nvme_attach_controller" 00:22:43.477 }' 00:22:43.477 [2024-05-16 09:35:36.851488] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:22:43.477 [2024-05-16 09:35:36.851552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314579 ] 00:22:43.477 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.477 [2024-05-16 09:35:36.915118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.477 [2024-05-16 09:35:36.989348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.739 Running I/O for 10 seconds... 
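With the first bdevperf pass now running, the target-side configuration traced above reduces to one short RPC sequence (rpc_cmd is the suite's helper for issuing SPDK JSON-RPC calls; paths are shortened here, and every name, NQN and option below is taken from the trace):

    # Target bring-up for the zcopy test, as traced above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # backgrounded; the suite then runs waitforlisten on its pid
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                         # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                                # 32 MB malloc bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # First pass: 10 s verify workload, queue depth 128, 8 KiB I/O; the JSON on /dev/fd/62
    # is the bdev_nvme_attach_controller config printed above (Nvme1 -> 10.0.0.2:4420, cnode1)
    ./build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192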
00:22:53.751 00:22:53.751 Latency(us) 00:22:53.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.751 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:53.751 Verification LBA range: start 0x0 length 0x1000 00:22:53.751 Nvme1n1 : 10.01 8568.68 66.94 0.00 0.00 14884.51 1884.16 29491.20 00:22:53.751 =================================================================================================================== 00:22:53.751 Total : 8568.68 66.94 0.00 0.00 14884.51 1884.16 29491.20 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=316692 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.751 { 00:22:53.751 "params": { 00:22:53.751 "name": "Nvme$subsystem", 00:22:53.751 "trtype": "$TEST_TRANSPORT", 00:22:53.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.751 "adrfam": "ipv4", 00:22:53.751 "trsvcid": "$NVMF_PORT", 00:22:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.751 "hdgst": ${hdgst:-false}, 00:22:53.751 "ddgst": ${ddgst:-false} 00:22:53.751 }, 00:22:53.751 "method": "bdev_nvme_attach_controller" 00:22:53.751 } 00:22:53.751 EOF 00:22:53.751 )") 00:22:53.751 [2024-05-16 09:35:47.301634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.751 [2024-05-16 09:35:47.301665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:22:53.751 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
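Two points about the block above and the long error run that follows. First, the verify-pass numbers are internally consistent: 8568.68 IOPS at 8192-byte I/Os is 8568.68 x 8192 / 2^20 = 66.9 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 128 gives an expected average latency of roughly 128 / 8568.68 = 14.9 ms, in line with the reported 14884.51 us. Second, the repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs are the target rejecting nvmf_subsystem_add_ns calls for a namespace that is still attached while the second bdevperf pass (5 s randrw at the same queue depth and I/O size) runs; the exact loop driving those calls is not visible in this trace, so the sketch below is only an assumed shape, not a copy of zcopy.sh.

    # Assumed shape of the namespace churn behind the repeated errors above and below
    # (hypothetical reconstruction; the real loop in zcopy.sh is not shown in this log)
    while kill -0 "$perfpid" 2>/dev/null; do    # $perfpid is the randrw bdevperf run started above
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # fails while NSID 1 is attached
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 || true
    done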
00:22:53.751 [2024-05-16 09:35:47.309614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:53.751 [2024-05-16 09:35:47.309624] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.012 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:22:54.013 09:35:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:54.013 "params": { 00:22:54.013 "name": "Nvme1", 00:22:54.013 "trtype": "tcp", 00:22:54.013 "traddr": "10.0.0.2", 00:22:54.013 "adrfam": "ipv4", 00:22:54.013 "trsvcid": "4420", 00:22:54.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.013 "hdgst": false, 00:22:54.013 "ddgst": false 00:22:54.013 }, 00:22:54.013 "method": "bdev_nvme_attach_controller" 00:22:54.013 }' 00:22:54.013 [2024-05-16 09:35:47.317634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.317644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.325655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.325663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.333674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.333682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.341694] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.341703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.343531] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:22:54.013 [2024-05-16 09:35:47.343577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316692 ] 00:22:54.013 [2024-05-16 09:35:47.349715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.349723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.357735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.357744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.365754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.365762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.013 [2024-05-16 09:35:47.373775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.373783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.381795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.381802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.389816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.389824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.397837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.397844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.401584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.013 [2024-05-16 09:35:47.405857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.405865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.413879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.413888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.421899] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.421908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.429920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.429929] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.437941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.437952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.445962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.445970] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.453983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.453994] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.462005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.462013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.465379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.013 [2024-05-16 09:35:47.470024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.470031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.478047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.478060] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.486072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.486086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.494089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.494097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.502105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.502113] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.510125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.510133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.518144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.518151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.526166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.526174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.534187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.534194] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.542227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.542243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.550232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.550243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.558251] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.558259] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:22:54.013 [2024-05-16 09:35:47.566272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.013 [2024-05-16 09:35:47.566281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.574291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.574301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.582313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.582323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.590331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.590339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.598940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.598953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.606376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.606385] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 Running I/O for 5 seconds... 00:22:54.275 [2024-05-16 09:35:47.614394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.614402] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.625131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.625147] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.633697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.633712] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.642805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.642821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.651539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.651554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.660385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.660401] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.669587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.669602] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.677919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.677934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.686409] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.686424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.695036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.695057] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.704182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.704197] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.712680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.712695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.721336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.721351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.730600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.730615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.739514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.739529] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.748362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.748377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.756605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.756621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.765510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.765525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.774496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.774511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.783581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.783596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.791976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.791991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.800432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.800446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.809254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.809269] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.817669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.817684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.275 [2024-05-16 09:35:47.826507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.275 [2024-05-16 09:35:47.826521] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.835330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.835345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.844530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.844545] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.853575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.853590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.862093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.862107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.871002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.871017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.880138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.880153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.888877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.888892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.897380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.897395] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.906189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.906203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.915434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.915449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.923273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.923288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.932216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.932234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.941320] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.941334] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.949932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.949947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.958629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.958644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.967772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.967787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.976559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.976573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.984958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.984973] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:47.993960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:47.993975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.003146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.003161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.011987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.012001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.021146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.021160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.029930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.029944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.038658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.038672] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.047857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.047872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.056618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.056632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:54.537 [2024-05-16 09:35:48.065590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:54.537 [2024-05-16 09:35:48.065604] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [this same pair of errors repeats continuously from 2024-05-16 09:35:48.073876 (elapsed 00:22:54.537) through 2024-05-16 09:35:51.905134 (elapsed 00:22:58.459): every namespace-add attempt is rejected with subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, each time followed by nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace] 00:22:58.459 [2024-05-16 09:35:51.918360] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.918376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:51.931718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.931737] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:51.944822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.944838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:51.958291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.958307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:51.971513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.971528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:51.984416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.984431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:51.997720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:51.997735] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.459 [2024-05-16 09:35:52.010834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.459 [2024-05-16 09:35:52.010849] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.024290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.024305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.037394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.037409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.050010] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.050026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.062960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.062975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.076227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.076243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.089136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.089151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.102196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.102212] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.115005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.115021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.127732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.127747] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.140293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.140310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.153439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.153454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.166473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.166488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.179849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.179869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.193005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.193021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.206121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.206137] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.218662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.218678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.231731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.231746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.244775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.244790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.257920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.257935] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.721 [2024-05-16 09:35:52.271064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.721 [2024-05-16 09:35:52.271080] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.982 [2024-05-16 09:35:52.284028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.982 [2024-05-16 09:35:52.284044] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.297190] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.297205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.310378] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.310394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.323654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.323670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.336463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.336479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.349316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.349331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.361741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.361757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.374446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.374462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.387297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.387313] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.400473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.400489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.413184] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.413200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.426358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.426377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.439848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.439863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.453452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.453468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.466463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.466479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.479696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.479711] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.492875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.492890] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.506074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.506090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.519614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.519630] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:58.983 [2024-05-16 09:35:52.532644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:58.983 [2024-05-16 09:35:52.532659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.545790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.545806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.558499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.558514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.571143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.571158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.583903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.583917] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.596625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.596640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.609606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.609620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.622670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.622686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 00:22:59.244 Latency(us) 00:22:59.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.244 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:22:59.244 Nvme1n1 : 5.01 19563.12 152.84 0.00 0.00 6536.52 2908.16 18350.08 00:22:59.244 =================================================================================================================== 00:22:59.244 Total : 19563.12 152.84 0.00 0.00 6536.52 2908.16 18350.08 00:22:59.244 [2024-05-16 09:35:52.632329] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.632344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.644360] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.644371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.664417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.664433] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.676440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.676452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.688469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.688479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.700495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.700504] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.712527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.712534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.724558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.724568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.736589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.736598] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.748618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.748627] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 [2024-05-16 09:35:52.760648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:59.244 [2024-05-16 09:35:52.760656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (316692) - No such process 00:22:59.244 09:35:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 316692 00:22:59.244 09:35:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:59.244 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.244 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:59.244 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:59.245 delay0 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.245 09:35:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:59.505 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.506 [2024-05-16 09:35:52.941390] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:06.089 Initializing NVMe Controllers 00:23:06.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.089 Initialization complete. Launching workers. 00:23:06.089 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 143 00:23:06.089 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 430, failed to submit 33 00:23:06.089 success 244, unsuccess 186, failed 0 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.089 rmmod nvme_tcp 00:23:06.089 rmmod nvme_fabrics 00:23:06.089 rmmod nvme_keyring 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 314363 ']' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 314363 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 314363 ']' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 314363 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 314363 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 314363' 00:23:06.089 killing process with pid 314363 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 314363 
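
For context, the tail of the zcopy run above (zcopy.sh@49 through @56 in the trace) swaps the malloc namespace for a delay bdev and then drives abort traffic at it. Below is a minimal standalone sketch of that sequence, assuming a target already exporting bdev malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and assuming the harness' rpc_cmd wrapper is replaced by direct scripts/rpc.py calls (the SPDK checkout path is the one used in this run and may differ):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Drop the fast namespace and re-export malloc0 behind a delay bdev
  # (all four delay parameters set to 1000000 us), as zcopy.sh@52-@54 do.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue random I/O against the now-slow namespace and abort it, matching the
  # abort example invocation and the submitted/aborted counters reported above.
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
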
00:23:06.089 [2024-05-16 09:35:59.306194] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 314363 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.089 09:35:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.006 09:36:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:08.006 00:23:08.006 real 0m32.695s 00:23:08.006 user 0m45.476s 00:23:08.006 sys 0m8.961s 00:23:08.006 09:36:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:08.006 09:36:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:23:08.006 ************************************ 00:23:08.006 END TEST nvmf_zcopy 00:23:08.006 ************************************ 00:23:08.006 09:36:01 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:08.006 09:36:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:08.006 09:36:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:08.006 09:36:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:08.267 ************************************ 00:23:08.267 START TEST nvmf_nmic 00:23:08.267 ************************************ 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:08.267 * Looking for test storage... 
00:23:08.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.267 09:36:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.268 09:36:01 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:23:08.268 09:36:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.418 
09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.418 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.418 09:36:08 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.418 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.418 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.418 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:23:16.418 00:23:16.418 --- 10.0.0.2 ping statistics --- 00:23:16.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.418 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:23:16.418 00:23:16.418 --- 10.0.0.1 ping statistics --- 00:23:16.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.418 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=323621 00:23:16.418 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 323621 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 323621 ']' 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.419 09:36:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 [2024-05-16 09:36:08.964500] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:23:16.419 [2024-05-16 09:36:08.964575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.419 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.419 [2024-05-16 09:36:09.040864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:16.419 [2024-05-16 09:36:09.116980] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.419 [2024-05-16 09:36:09.117016] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:16.419 [2024-05-16 09:36:09.117024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.419 [2024-05-16 09:36:09.117030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.419 [2024-05-16 09:36:09.117036] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.419 [2024-05-16 09:36:09.117114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.419 [2024-05-16 09:36:09.117246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.419 [2024-05-16 09:36:09.117388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.419 [2024-05-16 09:36:09.117390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 [2024-05-16 09:36:09.793595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 Malloc0 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 [2024-05-16 09:36:09.852744] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:16.419 [2024-05-16 09:36:09.852960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:16.419 test case1: single bdev can't be used in multiple subsystems 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 [2024-05-16 09:36:09.888890] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:16.419 [2024-05-16 09:36:09.888907] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:16.419 [2024-05-16 09:36:09.888915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:16.419 request: 00:23:16.419 { 00:23:16.419 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:16.419 "namespace": { 00:23:16.419 "bdev_name": "Malloc0", 00:23:16.419 "no_auto_visible": false 00:23:16.419 }, 00:23:16.419 "method": "nvmf_subsystem_add_ns", 00:23:16.419 "req_id": 1 00:23:16.419 } 00:23:16.419 Got JSON-RPC error response 00:23:16.419 response: 00:23:16.419 { 00:23:16.419 "code": -32602, 00:23:16.419 "message": "Invalid parameters" 00:23:16.419 } 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:23:16.419 Adding namespace failed - expected result. 
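
Test case1 above fails by design: Malloc0 was already added to nqn.2016-06.io.spdk:cnode1 (nmic.sh@22), so the bdev layer holds an exclusive_write claim on it and the second nvmf_subsystem_add_ns is rejected with the JSON-RPC error shown. A minimal sketch of the same sequence outside the harness, assuming a running nvmf_tgt and direct scripts/rpc.py calls in place of rpc_cmd (all names mirror the trace above):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim on Malloc0 succeeds
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # rejected: Malloc0 already claimed exclusive_write

Test case2, which follows, instead adds a second listener (port 4421) to the same subsystem, so the two nvme connect calls below reach the one namespace over two paths.
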
00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:16.419 test case2: host connect to nvmf target in multiple paths 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:16.419 [2024-05-16 09:36:09.901028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.419 09:36:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:18.332 09:36:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:19.716 09:36:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:19.716 09:36:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:23:19.716 09:36:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:19.716 09:36:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:23:19.716 09:36:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:23:21.643 09:36:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:21.643 [global] 00:23:21.643 thread=1 00:23:21.643 invalidate=1 00:23:21.643 rw=write 00:23:21.643 time_based=1 00:23:21.643 runtime=1 00:23:21.643 ioengine=libaio 00:23:21.643 direct=1 00:23:21.643 bs=4096 00:23:21.643 iodepth=1 00:23:21.643 norandommap=0 00:23:21.643 numjobs=1 00:23:21.643 00:23:21.643 verify_dump=1 00:23:21.643 verify_backlog=512 00:23:21.643 verify_state_save=0 00:23:21.643 do_verify=1 00:23:21.643 verify=crc32c-intel 00:23:21.643 [job0] 00:23:21.643 filename=/dev/nvme0n1 00:23:21.643 Could not set queue depth (nvme0n1) 00:23:22.214 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:22.214 fio-3.35 00:23:22.214 Starting 1 thread 00:23:23.158 00:23:23.158 job0: (groupid=0, jobs=1): err= 0: pid=325166: Thu May 16 09:36:16 2024 00:23:23.158 read: IOPS=368, BW=1475KiB/s (1510kB/s)(1516KiB/1028msec) 00:23:23.158 slat (nsec): min=24053, max=57097, avg=25083.46, stdev=3308.43 
00:23:23.158 clat (usec): min=910, max=42175, avg=1869.24, stdev=5492.47 00:23:23.158 lat (usec): min=935, max=42199, avg=1894.33, stdev=5492.45 00:23:23.158 clat percentiles (usec): 00:23:23.158 | 1.00th=[ 930], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1074], 00:23:23.158 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:23:23.158 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:23:23.158 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:23.158 | 99.99th=[42206] 00:23:23.158 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:23:23.158 slat (nsec): min=8711, max=51778, avg=26515.38, stdev=9398.63 00:23:23.158 clat (usec): min=301, max=833, avg=565.59, stdev=90.61 00:23:23.158 lat (usec): min=311, max=865, avg=592.11, stdev=95.79 00:23:23.158 clat percentiles (usec): 00:23:23.158 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 486], 00:23:23.158 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 594], 00:23:23.158 | 70.00th=[ 627], 80.00th=[ 652], 90.00th=[ 668], 95.00th=[ 693], 00:23:23.158 | 99.00th=[ 734], 99.50th=[ 766], 99.90th=[ 832], 99.95th=[ 832], 00:23:23.158 | 99.99th=[ 832] 00:23:23.158 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:23:23.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:23.158 lat (usec) : 500=13.58%, 750=43.55%, 1000=2.81% 00:23:23.158 lat (msec) : 2=39.28%, 50=0.79% 00:23:23.158 cpu : usr=2.14%, sys=2.63%, ctx=891, majf=0, minf=1 00:23:23.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.159 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:23.159 00:23:23.159 Run status group 0 (all jobs): 00:23:23.159 READ: bw=1475KiB/s (1510kB/s), 1475KiB/s-1475KiB/s (1510kB/s-1510kB/s), io=1516KiB (1552kB), run=1028-1028msec 00:23:23.159 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:23:23.159 00:23:23.159 Disk stats (read/write): 00:23:23.159 nvme0n1: ios=387/512, merge=0/0, ticks=613/231, in_queue=844, util=93.89% 00:23:23.159 09:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:23.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:23.420 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:23.420 rmmod nvme_tcp 00:23:23.420 rmmod nvme_fabrics 00:23:23.420 rmmod nvme_keyring 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 323621 ']' 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 323621 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 323621 ']' 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 323621 00:23:23.681 09:36:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 323621 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 323621' 00:23:23.681 killing process with pid 323621 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 323621 00:23:23.681 [2024-05-16 09:36:17.051919] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:23.681 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 323621 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.682 09:36:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.234 09:36:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.234 00:23:26.234 real 0m17.693s 00:23:26.234 user 0m48.639s 00:23:26.234 sys 0m6.228s 00:23:26.234 09:36:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:26.234 09:36:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:23:26.234 ************************************ 00:23:26.234 END TEST nvmf_nmic 00:23:26.234 ************************************ 00:23:26.234 09:36:19 nvmf_tcp -- nvmf/nvmf.sh@55 -- 
# run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:26.234 09:36:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:26.234 09:36:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:26.234 09:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.234 ************************************ 00:23:26.234 START TEST nvmf_fio_target 00:23:26.234 ************************************ 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:23:26.234 * Looking for test storage... 00:23:26.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.234 09:36:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.825 09:36:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.825 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.825 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.825 09:36:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.825 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.825 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.825 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:23:33.087 00:23:33.087 --- 10.0.0.2 ping statistics --- 00:23:33.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.087 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:33.087 00:23:33.087 --- 10.0.0.1 ping statistics --- 00:23:33.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.087 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.087 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=329524 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 329524 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 329524 ']' 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.349 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
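The ip/iptables trace above amounts to the loopback topology sketched below: the target-side CVL port is moved into a private network namespace and nvmf_tgt is launched inside it, while the initiator-side port stays in the root namespace. This is a condensed sketch assuming the two e810 ports already appear as cvl_0_0/cvl_0_1 and that the SPDK build directory is the working directory; it is not a verbatim excerpt of test/nvmf/common.sh.

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target (~0.6 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator (~0.3 ms above)
    # Every target-side command is then run inside the namespace, e.g.:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With this in place, the fio_target test below drives the target entirely through scripts/rpc.py and connects from the root namespace with nvme-cli over 10.0.0.2:4420.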
00:23:33.350 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.350 09:36:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.350 [2024-05-16 09:36:26.724859] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:23:33.350 [2024-05-16 09:36:26.724922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.350 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.350 [2024-05-16 09:36:26.795592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.350 [2024-05-16 09:36:26.871004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.350 [2024-05-16 09:36:26.871043] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.350 [2024-05-16 09:36:26.871057] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.350 [2024-05-16 09:36:26.871064] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.350 [2024-05-16 09:36:26.871070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.350 [2024-05-16 09:36:26.871136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.350 [2024-05-16 09:36:26.871244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.350 [2024-05-16 09:36:26.871398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.350 [2024-05-16 09:36:26.871400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:34.295 [2024-05-16 09:36:27.691018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.295 09:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:34.556 09:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:23:34.556 09:36:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:34.556 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:23:34.556 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:34.818 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:23:34.818 09:36:28 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:35.079 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:23:35.079 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:23:35.079 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:35.341 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:23:35.341 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:35.602 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:23:35.602 09:36:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:35.602 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:23:35.602 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:23:35.864 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:36.126 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:36.126 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.126 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:23:36.126 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:36.387 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.648 [2024-05-16 09:36:29.951964] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:36.648 [2024-05-16 09:36:29.952226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.648 09:36:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:23:36.648 09:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:23:36.910 09:36:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:38.827 09:36:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:23:38.827 09:36:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:23:38.827 09:36:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:23:38.827 09:36:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:23:38.827 09:36:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:23:38.827 09:36:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:23:40.744 09:36:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:23:40.744 [global] 00:23:40.744 thread=1 00:23:40.744 invalidate=1 00:23:40.744 rw=write 00:23:40.744 time_based=1 00:23:40.744 runtime=1 00:23:40.744 ioengine=libaio 00:23:40.744 direct=1 00:23:40.744 bs=4096 00:23:40.744 iodepth=1 00:23:40.744 norandommap=0 00:23:40.744 numjobs=1 00:23:40.744 00:23:40.744 verify_dump=1 00:23:40.744 verify_backlog=512 00:23:40.744 verify_state_save=0 00:23:40.744 do_verify=1 00:23:40.744 verify=crc32c-intel 00:23:40.744 [job0] 00:23:40.744 filename=/dev/nvme0n1 00:23:40.744 [job1] 00:23:40.744 filename=/dev/nvme0n2 00:23:40.744 [job2] 00:23:40.744 filename=/dev/nvme0n3 00:23:40.744 [job3] 00:23:40.744 filename=/dev/nvme0n4 00:23:40.744 Could not set queue depth (nvme0n1) 00:23:40.744 Could not set queue depth (nvme0n2) 00:23:40.744 Could not set queue depth (nvme0n3) 00:23:40.744 Could not set queue depth (nvme0n4) 00:23:41.003 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:41.003 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:41.003 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:41.003 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:41.003 fio-3.35 00:23:41.003 Starting 4 threads 00:23:42.418 00:23:42.418 job0: (groupid=0, jobs=1): err= 0: pid=331414: Thu May 16 09:36:35 2024 00:23:42.418 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1018msec) 00:23:42.418 slat (nsec): min=23368, max=24178, avg=23648.44, stdev=214.68 00:23:42.418 clat (usec): min=1217, max=42093, avg=37243.27, stdev=13091.03 00:23:42.418 lat (usec): min=1241, max=42117, avg=37266.91, stdev=13091.05 00:23:42.418 clat percentiles (usec): 00:23:42.418 | 1.00th=[ 1221], 5.00th=[ 1221], 10.00th=[ 1336], 20.00th=[41157], 00:23:42.418 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:23:42.418 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:42.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:23:42.418 | 99.99th=[42206] 00:23:42.418 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:23:42.418 slat (nsec): min=9309, max=48609, avg=28493.46, stdev=7254.68 00:23:42.418 clat (usec): min=276, max=978, avg=642.05, stdev=130.15 00:23:42.418 lat (usec): min=286, max=1008, avg=670.54, stdev=132.24 00:23:42.418 clat percentiles (usec): 00:23:42.418 | 1.00th=[ 347], 5.00th=[ 424], 10.00th=[ 469], 20.00th=[ 519], 00:23:42.418 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:23:42.418 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 848], 00:23:42.418 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 979], 00:23:42.418 | 99.99th=[ 979] 00:23:42.418 bw ( KiB/s): min= 4096, max= 4096, per=48.03%, avg=4096.00, stdev= 0.00, samples=1 00:23:42.418 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:42.418 lat (usec) : 500=15.09%, 750=62.08%, 1000=19.43% 00:23:42.418 lat (msec) : 2=0.38%, 50=3.02% 00:23:42.418 cpu : usr=0.79%, sys=1.38%, ctx=530, majf=0, minf=1 00:23:42.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.418 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.418 job1: (groupid=0, jobs=1): err= 0: pid=331421: Thu May 16 09:36:35 2024 00:23:42.418 read: IOPS=166, BW=667KiB/s (683kB/s)(668KiB/1001msec) 00:23:42.418 slat (nsec): min=6495, max=44573, avg=25325.78, stdev=4512.97 00:23:42.418 clat (usec): min=395, max=42113, avg=3769.19, stdev=10538.67 00:23:42.418 lat (usec): min=421, max=42139, avg=3794.52, stdev=10538.78 00:23:42.418 clat percentiles (usec): 00:23:42.418 | 1.00th=[ 502], 5.00th=[ 603], 10.00th=[ 668], 20.00th=[ 701], 00:23:42.418 | 30.00th=[ 750], 40.00th=[ 848], 50.00th=[ 889], 60.00th=[ 906], 00:23:42.418 | 70.00th=[ 938], 80.00th=[ 979], 90.00th=[ 1139], 95.00th=[41157], 00:23:42.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:42.418 | 99.99th=[42206] 00:23:42.419 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:23:42.419 slat (usec): min=8, max=2009, avg=35.72, stdev=87.77 00:23:42.419 clat (usec): min=126, max=976, avg=671.45, stdev=155.95 00:23:42.419 lat (usec): min=137, max=2870, avg=707.17, stdev=184.72 00:23:42.419 clat percentiles (usec): 00:23:42.419 | 1.00th=[ 265], 5.00th=[ 388], 10.00th=[ 465], 20.00th=[ 545], 00:23:42.419 | 30.00th=[ 594], 40.00th=[ 644], 50.00th=[ 685], 60.00th=[ 725], 00:23:42.419 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 857], 95.00th=[ 898], 00:23:42.419 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 979], 99.95th=[ 979], 00:23:42.419 | 99.99th=[ 979] 00:23:42.419 bw ( KiB/s): min= 4096, max= 4096, per=48.03%, avg=4096.00, stdev= 0.00, samples=1 00:23:42.419 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:42.419 lat (usec) : 250=0.59%, 500=9.72%, 750=45.21%, 1000=39.76% 00:23:42.419 lat (msec) : 2=2.95%, 50=1.77% 00:23:42.419 cpu : usr=1.30%, sys=2.70%, ctx=682, majf=0, minf=1 00:23:42.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.419 issued rwts: 
total=167,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.419 job2: (groupid=0, jobs=1): err= 0: pid=331422: Thu May 16 09:36:35 2024 00:23:42.419 read: IOPS=15, BW=61.6KiB/s (63.1kB/s)(64.0KiB/1039msec) 00:23:42.419 slat (nsec): min=21071, max=25480, avg=24745.87, stdev=992.67 00:23:42.419 clat (usec): min=40912, max=41953, avg=41237.96, stdev=384.56 00:23:42.419 lat (usec): min=40937, max=41978, avg=41262.71, stdev=384.42 00:23:42.419 clat percentiles (usec): 00:23:42.419 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:23:42.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:23:42.419 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:23:42.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:42.419 | 99.99th=[42206] 00:23:42.419 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:23:42.419 slat (nsec): min=9075, max=51122, avg=30688.46, stdev=7060.41 00:23:42.419 clat (usec): min=136, max=1029, avg=701.13, stdev=153.91 00:23:42.419 lat (usec): min=146, max=1062, avg=731.82, stdev=155.43 00:23:42.419 clat percentiles (usec): 00:23:42.419 | 1.00th=[ 255], 5.00th=[ 404], 10.00th=[ 494], 20.00th=[ 586], 00:23:42.419 | 30.00th=[ 644], 40.00th=[ 685], 50.00th=[ 725], 60.00th=[ 758], 00:23:42.419 | 70.00th=[ 791], 80.00th=[ 832], 90.00th=[ 881], 95.00th=[ 906], 00:23:42.419 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 1029], 99.95th=[ 1029], 00:23:42.419 | 99.99th=[ 1029] 00:23:42.419 bw ( KiB/s): min= 4096, max= 4096, per=48.03%, avg=4096.00, stdev= 0.00, samples=1 00:23:42.419 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:42.419 lat (usec) : 250=0.76%, 500=9.85%, 750=46.02%, 1000=39.77% 00:23:42.419 lat (msec) : 2=0.57%, 50=3.03% 00:23:42.419 cpu : usr=0.96%, sys=2.02%, ctx=528, majf=0, minf=1 00:23:42.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.419 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.419 job3: (groupid=0, jobs=1): err= 0: pid=331423: Thu May 16 09:36:35 2024 00:23:42.419 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:23:42.419 slat (nsec): min=6430, max=59840, avg=24293.79, stdev=5320.83 00:23:42.419 clat (usec): min=400, max=41092, avg=1112.20, stdev=3069.20 00:23:42.419 lat (usec): min=426, max=41117, avg=1136.50, stdev=3069.21 00:23:42.419 clat percentiles (usec): 00:23:42.419 | 1.00th=[ 635], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 807], 00:23:42.419 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 873], 60.00th=[ 898], 00:23:42.419 | 70.00th=[ 922], 80.00th=[ 955], 90.00th=[ 988], 95.00th=[ 1020], 00:23:42.419 | 99.00th=[ 1057], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:23:42.419 | 99.99th=[41157] 00:23:42.419 write: IOPS=678, BW=2713KiB/s (2778kB/s)(2716KiB/1001msec); 0 zone resets 00:23:42.419 slat (nsec): min=8715, max=51045, avg=28289.53, stdev=9232.04 00:23:42.419 clat (usec): min=201, max=857, avg=574.90, stdev=114.83 00:23:42.419 lat (usec): min=210, max=889, avg=603.19, stdev=119.06 00:23:42.419 clat percentiles (usec): 00:23:42.419 | 1.00th=[ 314], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 469], 00:23:42.419 | 30.00th=[ 529], 
40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:23:42.419 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 742], 00:23:42.419 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 857], 99.95th=[ 857], 00:23:42.419 | 99.99th=[ 857] 00:23:42.419 bw ( KiB/s): min= 4096, max= 4096, per=48.03%, avg=4096.00, stdev= 0.00, samples=1 00:23:42.419 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:42.419 lat (usec) : 250=0.25%, 500=14.61%, 750=43.24%, 1000=38.46% 00:23:42.419 lat (msec) : 2=3.19%, 50=0.25% 00:23:42.419 cpu : usr=2.00%, sys=4.30%, ctx=1191, majf=0, minf=1 00:23:42.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.419 issued rwts: total=512,679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:42.419 00:23:42.419 Run status group 0 (all jobs): 00:23:42.419 READ: bw=2745KiB/s (2811kB/s), 61.6KiB/s-2046KiB/s (63.1kB/s-2095kB/s), io=2852KiB (2920kB), run=1001-1039msec 00:23:42.419 WRITE: bw=8527KiB/s (8732kB/s), 1971KiB/s-2713KiB/s (2018kB/s-2778kB/s), io=8860KiB (9073kB), run=1001-1039msec 00:23:42.419 00:23:42.419 Disk stats (read/write): 00:23:42.419 nvme0n1: ios=62/512, merge=0/0, ticks=477/311, in_queue=788, util=81.96% 00:23:42.419 nvme0n2: ios=83/512, merge=0/0, ticks=1290/270, in_queue=1560, util=90.67% 00:23:42.419 nvme0n3: ios=71/512, merge=0/0, ticks=862/293, in_queue=1155, util=96.53% 00:23:42.419 nvme0n4: ios=534/512, merge=0/0, ticks=986/230, in_queue=1216, util=99.32% 00:23:42.419 09:36:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:23:42.419 [global] 00:23:42.419 thread=1 00:23:42.419 invalidate=1 00:23:42.419 rw=randwrite 00:23:42.419 time_based=1 00:23:42.419 runtime=1 00:23:42.419 ioengine=libaio 00:23:42.419 direct=1 00:23:42.419 bs=4096 00:23:42.419 iodepth=1 00:23:42.419 norandommap=0 00:23:42.419 numjobs=1 00:23:42.419 00:23:42.419 verify_dump=1 00:23:42.419 verify_backlog=512 00:23:42.419 verify_state_save=0 00:23:42.419 do_verify=1 00:23:42.419 verify=crc32c-intel 00:23:42.419 [job0] 00:23:42.419 filename=/dev/nvme0n1 00:23:42.419 [job1] 00:23:42.419 filename=/dev/nvme0n2 00:23:42.419 [job2] 00:23:42.419 filename=/dev/nvme0n3 00:23:42.419 [job3] 00:23:42.419 filename=/dev/nvme0n4 00:23:42.419 Could not set queue depth (nvme0n1) 00:23:42.419 Could not set queue depth (nvme0n2) 00:23:42.419 Could not set queue depth (nvme0n3) 00:23:42.419 Could not set queue depth (nvme0n4) 00:23:42.685 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:42.685 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:42.685 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:42.685 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:42.685 fio-3.35 00:23:42.685 Starting 4 threads 00:23:44.100 00:23:44.100 job0: (groupid=0, jobs=1): err= 0: pid=331943: Thu May 16 09:36:37 2024 00:23:44.100 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:23:44.100 slat (nsec): min=6600, max=54466, avg=23507.43, stdev=6852.23 00:23:44.100 
clat (usec): min=284, max=42075, avg=1092.51, stdev=3655.75 00:23:44.100 lat (usec): min=290, max=42101, avg=1116.02, stdev=3655.65 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 416], 5.00th=[ 519], 10.00th=[ 570], 20.00th=[ 627], 00:23:44.100 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 766], 60.00th=[ 807], 00:23:44.100 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 979], 00:23:44.100 | 99.00th=[ 1303], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:23:44.100 | 99.99th=[42206] 00:23:44.100 write: IOPS=766, BW=3065KiB/s (3138kB/s)(3068KiB/1001msec); 0 zone resets 00:23:44.100 slat (nsec): min=8290, max=60986, avg=28432.21, stdev=9080.62 00:23:44.100 clat (usec): min=183, max=905, avg=517.84, stdev=123.29 00:23:44.100 lat (usec): min=213, max=938, avg=546.28, stdev=125.14 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 235], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 404], 00:23:44.100 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 519], 60.00th=[ 553], 00:23:44.100 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 725], 00:23:44.100 | 99.00th=[ 807], 99.50th=[ 807], 99.90th=[ 906], 99.95th=[ 906], 00:23:44.100 | 99.99th=[ 906] 00:23:44.100 bw ( KiB/s): min= 4096, max= 4096, per=42.45%, avg=4096.00, stdev= 0.00, samples=1 00:23:44.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:44.100 lat (usec) : 250=0.86%, 500=27.44%, 750=48.63%, 1000=21.27% 00:23:44.100 lat (msec) : 2=1.41%, 20=0.08%, 50=0.31% 00:23:44.100 cpu : usr=1.70%, sys=4.20%, ctx=1282, majf=0, minf=1 00:23:44.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 issued rwts: total=512,767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.100 job1: (groupid=0, jobs=1): err= 0: pid=331945: Thu May 16 09:36:37 2024 00:23:44.100 read: IOPS=193, BW=772KiB/s (791kB/s)(804KiB/1041msec) 00:23:44.100 slat (nsec): min=7624, max=85771, avg=25475.65, stdev=6045.55 00:23:44.100 clat (usec): min=814, max=41961, avg=3538.95, stdev=9664.74 00:23:44.100 lat (usec): min=837, max=41987, avg=3564.43, stdev=9664.84 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 848], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 963], 00:23:44.100 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1045], 00:23:44.100 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1205], 95.00th=[41157], 00:23:44.100 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:23:44.100 | 99.99th=[42206] 00:23:44.100 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:23:44.100 slat (nsec): min=9297, max=57549, avg=29115.94, stdev=9801.44 00:23:44.100 clat (usec): min=223, max=904, avg=592.60, stdev=122.94 00:23:44.100 lat (usec): min=240, max=942, avg=621.71, stdev=126.03 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 306], 5.00th=[ 383], 10.00th=[ 429], 20.00th=[ 486], 00:23:44.100 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:23:44.100 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 791], 00:23:44.100 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:23:44.100 | 99.99th=[ 906] 00:23:44.100 bw ( KiB/s): min= 4096, max= 4096, per=42.45%, avg=4096.00, stdev= 0.00, samples=1 00:23:44.100 iops : min= 1024, max= 
1024, avg=1024.00, stdev= 0.00, samples=1 00:23:44.100 lat (usec) : 250=0.28%, 500=16.55%, 750=47.97%, 1000=18.09% 00:23:44.100 lat (msec) : 2=15.29%, 50=1.82% 00:23:44.100 cpu : usr=1.15%, sys=1.92%, ctx=715, majf=0, minf=1 00:23:44.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 issued rwts: total=201,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.100 job2: (groupid=0, jobs=1): err= 0: pid=331946: Thu May 16 09:36:37 2024 00:23:44.100 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:23:44.100 slat (nsec): min=7281, max=59952, avg=26107.96, stdev=4021.11 00:23:44.100 clat (usec): min=586, max=41552, avg=1054.66, stdev=1795.46 00:23:44.100 lat (usec): min=613, max=41563, avg=1080.77, stdev=1794.81 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 922], 00:23:44.100 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 988], 00:23:44.100 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1123], 00:23:44.100 | 99.00th=[ 1287], 99.50th=[ 1369], 99.90th=[41681], 99.95th=[41681], 00:23:44.100 | 99.99th=[41681] 00:23:44.100 write: IOPS=719, BW=2877KiB/s (2946kB/s)(2880KiB/1001msec); 0 zone resets 00:23:44.100 slat (nsec): min=9046, max=77852, avg=30016.18, stdev=10210.20 00:23:44.100 clat (usec): min=212, max=1259, avg=576.76, stdev=130.42 00:23:44.100 lat (usec): min=243, max=1293, avg=606.78, stdev=133.87 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 265], 5.00th=[ 355], 10.00th=[ 420], 20.00th=[ 474], 00:23:44.100 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:23:44.100 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 775], 00:23:44.100 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 1254], 99.95th=[ 1254], 00:23:44.100 | 99.99th=[ 1254] 00:23:44.100 bw ( KiB/s): min= 4096, max= 4096, per=42.45%, avg=4096.00, stdev= 0.00, samples=1 00:23:44.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:44.100 lat (usec) : 250=0.57%, 500=14.85%, 750=39.37%, 1000=32.06% 00:23:44.100 lat (msec) : 2=13.07%, 50=0.08% 00:23:44.100 cpu : usr=3.20%, sys=4.00%, ctx=1233, majf=0, minf=1 00:23:44.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 issued rwts: total=512,720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.100 job3: (groupid=0, jobs=1): err= 0: pid=331947: Thu May 16 09:36:37 2024 00:23:44.100 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:23:44.100 slat (nsec): min=8184, max=25274, avg=24028.65, stdev=4084.95 00:23:44.100 clat (usec): min=937, max=42064, avg=39542.61, stdev=9948.35 00:23:44.100 lat (usec): min=963, max=42089, avg=39566.64, stdev=9948.05 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 938], 5.00th=[ 938], 10.00th=[41681], 20.00th=[41681], 00:23:44.100 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:23:44.100 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:44.100 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:23:44.100 | 99.99th=[42206] 00:23:44.100 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:23:44.100 slat (nsec): min=9305, max=56847, avg=28450.47, stdev=8900.52 00:23:44.100 clat (usec): min=224, max=1048, avg=619.55, stdev=112.59 00:23:44.100 lat (usec): min=235, max=1079, avg=648.00, stdev=116.81 00:23:44.100 clat percentiles (usec): 00:23:44.100 | 1.00th=[ 363], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 537], 00:23:44.100 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:23:44.100 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:23:44.100 | 99.00th=[ 857], 99.50th=[ 947], 99.90th=[ 1045], 99.95th=[ 1045], 00:23:44.100 | 99.99th=[ 1045] 00:23:44.100 bw ( KiB/s): min= 4096, max= 4096, per=42.45%, avg=4096.00, stdev= 0.00, samples=1 00:23:44.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:23:44.100 lat (usec) : 250=0.19%, 500=16.64%, 750=71.46%, 1000=8.51% 00:23:44.100 lat (msec) : 2=0.19%, 50=3.02% 00:23:44.100 cpu : usr=0.79%, sys=1.49%, ctx=530, majf=0, minf=1 00:23:44.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.100 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:44.100 00:23:44.100 Run status group 0 (all jobs): 00:23:44.100 READ: bw=4772KiB/s (4887kB/s), 67.4KiB/s-2046KiB/s (69.0kB/s-2095kB/s), io=4968KiB (5087kB), run=1001-1041msec 00:23:44.100 WRITE: bw=9648KiB/s (9880kB/s), 1967KiB/s-3065KiB/s (2015kB/s-3138kB/s), io=9.81MiB (10.3MB), run=1001-1041msec 00:23:44.100 00:23:44.100 Disk stats (read/write): 00:23:44.100 nvme0n1: ios=526/512, merge=0/0, ticks=747/224, in_queue=971, util=84.97% 00:23:44.100 nvme0n2: ios=199/512, merge=0/0, ticks=1344/276, in_queue=1620, util=88.39% 00:23:44.100 nvme0n3: ios=537/512, merge=0/0, ticks=677/231, in_queue=908, util=92.95% 00:23:44.100 nvme0n4: ios=66/512, merge=0/0, ticks=589/304, in_queue=893, util=97.23% 00:23:44.100 09:36:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:23:44.100 [global] 00:23:44.100 thread=1 00:23:44.100 invalidate=1 00:23:44.100 rw=write 00:23:44.100 time_based=1 00:23:44.100 runtime=1 00:23:44.100 ioengine=libaio 00:23:44.100 direct=1 00:23:44.100 bs=4096 00:23:44.100 iodepth=128 00:23:44.100 norandommap=0 00:23:44.100 numjobs=1 00:23:44.100 00:23:44.100 verify_dump=1 00:23:44.100 verify_backlog=512 00:23:44.100 verify_state_save=0 00:23:44.100 do_verify=1 00:23:44.100 verify=crc32c-intel 00:23:44.100 [job0] 00:23:44.100 filename=/dev/nvme0n1 00:23:44.100 [job1] 00:23:44.100 filename=/dev/nvme0n2 00:23:44.100 [job2] 00:23:44.100 filename=/dev/nvme0n3 00:23:44.100 [job3] 00:23:44.101 filename=/dev/nvme0n4 00:23:44.101 Could not set queue depth (nvme0n1) 00:23:44.101 Could not set queue depth (nvme0n2) 00:23:44.101 Could not set queue depth (nvme0n3) 00:23:44.101 Could not set queue depth (nvme0n4) 00:23:44.361 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:44.361 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:44.361 job2: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:44.361 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:44.361 fio-3.35 00:23:44.361 Starting 4 threads 00:23:45.754 00:23:45.754 job0: (groupid=0, jobs=1): err= 0: pid=332465: Thu May 16 09:36:38 2024 00:23:45.754 read: IOPS=5049, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1014msec) 00:23:45.754 slat (nsec): min=850, max=19659k, avg=107121.37, stdev=858107.84 00:23:45.754 clat (usec): min=4007, max=87299, avg=13064.97, stdev=9456.78 00:23:45.754 lat (usec): min=4012, max=87306, avg=13172.09, stdev=9541.31 00:23:45.754 clat percentiles (usec): 00:23:45.754 | 1.00th=[ 6194], 5.00th=[ 7635], 10.00th=[ 8160], 20.00th=[ 8848], 00:23:45.754 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10945], 00:23:45.754 | 70.00th=[12387], 80.00th=[15008], 90.00th=[19792], 95.00th=[24511], 00:23:45.754 | 99.00th=[70779], 99.50th=[82314], 99.90th=[86508], 99.95th=[87557], 00:23:45.754 | 99.99th=[87557] 00:23:45.754 write: IOPS=5159, BW=20.2MiB/s (21.1MB/s)(20.4MiB/1014msec); 0 zone resets 00:23:45.754 slat (nsec): min=1554, max=10009k, avg=81379.91, stdev=507626.44 00:23:45.754 clat (usec): min=1156, max=87290, avg=11798.69, stdev=11308.62 00:23:45.754 lat (usec): min=1165, max=87298, avg=11880.07, stdev=11376.80 00:23:45.754 clat percentiles (usec): 00:23:45.754 | 1.00th=[ 3195], 5.00th=[ 4883], 10.00th=[ 6194], 20.00th=[ 8029], 00:23:45.754 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:23:45.754 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[16712], 95.00th=[35914], 00:23:45.754 | 99.00th=[68682], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:23:45.754 | 99.99th=[87557] 00:23:45.754 bw ( KiB/s): min=12288, max=28672, per=21.61%, avg=20480.00, stdev=11585.24, samples=2 00:23:45.754 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:23:45.754 lat (msec) : 2=0.08%, 4=1.09%, 10=66.22%, 20=23.99%, 50=6.09% 00:23:45.754 lat (msec) : 100=2.54% 00:23:45.754 cpu : usr=4.15%, sys=4.05%, ctx=561, majf=0, minf=1 00:23:45.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:45.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:45.754 issued rwts: total=5120,5232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:45.754 job1: (groupid=0, jobs=1): err= 0: pid=332467: Thu May 16 09:36:38 2024 00:23:45.754 read: IOPS=6699, BW=26.2MiB/s (27.4MB/s)(26.2MiB/1003msec) 00:23:45.754 slat (nsec): min=841, max=17544k, avg=68967.45, stdev=610637.50 00:23:45.754 clat (usec): min=1075, max=42247, avg=9761.44, stdev=4964.03 00:23:45.754 lat (usec): min=3037, max=42859, avg=9830.41, stdev=5020.69 00:23:45.754 clat percentiles (usec): 00:23:45.754 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6980], 00:23:45.754 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8094], 00:23:45.754 | 70.00th=[ 9372], 80.00th=[11469], 90.00th=[17957], 95.00th=[20579], 00:23:45.754 | 99.00th=[24773], 99.50th=[29754], 99.90th=[42206], 99.95th=[42206], 00:23:45.754 | 99.99th=[42206] 00:23:45.754 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:23:45.754 slat (nsec): min=1514, max=15551k, avg=62952.80, stdev=483450.50 00:23:45.754 clat (usec): min=1280, max=33416, avg=8599.23, stdev=4552.58 
00:23:45.754 lat (usec): min=1291, max=33418, avg=8662.18, stdev=4584.83 00:23:45.754 clat percentiles (usec): 00:23:45.754 | 1.00th=[ 3490], 5.00th=[ 4490], 10.00th=[ 5342], 20.00th=[ 6521], 00:23:45.754 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7439], 00:23:45.754 | 70.00th=[ 8291], 80.00th=[ 9503], 90.00th=[12780], 95.00th=[19792], 00:23:45.754 | 99.00th=[27132], 99.50th=[27657], 99.90th=[30802], 99.95th=[33424], 00:23:45.754 | 99.99th=[33424] 00:23:45.754 bw ( KiB/s): min=24576, max=32264, per=29.98%, avg=28420.00, stdev=5436.24, samples=2 00:23:45.754 iops : min= 6144, max= 8066, avg=7105.00, stdev=1359.06, samples=2 00:23:45.754 lat (msec) : 2=0.08%, 4=1.32%, 10=76.87%, 20=16.56%, 50=5.17% 00:23:45.754 cpu : usr=5.49%, sys=6.99%, ctx=412, majf=0, minf=1 00:23:45.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:45.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:45.754 issued rwts: total=6720,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:45.754 job2: (groupid=0, jobs=1): err= 0: pid=332471: Thu May 16 09:36:38 2024 00:23:45.754 read: IOPS=7211, BW=28.2MiB/s (29.5MB/s)(28.4MiB/1007msec) 00:23:45.754 slat (nsec): min=907, max=7752.7k, avg=68159.63, stdev=514445.82 00:23:45.754 clat (usec): min=2373, max=40794, avg=9312.82, stdev=3794.51 00:23:45.754 lat (usec): min=2376, max=40802, avg=9380.98, stdev=3811.32 00:23:45.754 clat percentiles (usec): 00:23:45.754 | 1.00th=[ 3851], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7504], 00:23:45.754 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:45.754 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[12125], 95.00th=[13829], 00:23:45.754 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:23:45.754 | 99.99th=[40633] 00:23:45.754 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:23:45.754 slat (nsec): min=1637, max=32006k, avg=60754.23, stdev=522000.77 00:23:45.754 clat (usec): min=2138, max=16653, avg=7696.87, stdev=1851.87 00:23:45.754 lat (usec): min=2146, max=38933, avg=7757.62, stdev=1908.67 00:23:45.754 clat percentiles (usec): 00:23:45.754 | 1.00th=[ 2802], 5.00th=[ 4015], 10.00th=[ 5211], 20.00th=[ 6128], 00:23:45.754 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:23:45.755 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10814], 00:23:45.755 | 99.00th=[12387], 99.50th=[13435], 99.90th=[15270], 99.95th=[15533], 00:23:45.755 | 99.99th=[16712] 00:23:45.755 bw ( KiB/s): min=28672, max=32504, per=32.27%, avg=30588.00, stdev=2709.63, samples=2 00:23:45.755 iops : min= 7168, max= 8126, avg=7647.00, stdev=677.41, samples=2 00:23:45.755 lat (msec) : 4=3.11%, 10=82.02%, 20=14.03%, 50=0.85% 00:23:45.755 cpu : usr=6.06%, sys=6.36%, ctx=676, majf=0, minf=1 00:23:45.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:45.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:45.755 issued rwts: total=7262,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:45.755 job3: (groupid=0, jobs=1): err= 0: pid=332472: Thu May 16 09:36:38 2024 00:23:45.755 read: IOPS=3531, BW=13.8MiB/s 
(14.5MB/s)(14.0MiB/1015msec) 00:23:45.755 slat (nsec): min=1909, max=15740k, avg=124519.50, stdev=914121.73 00:23:45.755 clat (usec): min=4111, max=77860, avg=15753.14, stdev=8510.15 00:23:45.755 lat (usec): min=4406, max=77867, avg=15877.66, stdev=8598.07 00:23:45.755 clat percentiles (usec): 00:23:45.755 | 1.00th=[ 5604], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[10159], 00:23:45.755 | 30.00th=[11731], 40.00th=[11994], 50.00th=[14091], 60.00th=[15139], 00:23:45.755 | 70.00th=[17695], 80.00th=[20317], 90.00th=[22676], 95.00th=[23200], 00:23:45.755 | 99.00th=[63701], 99.50th=[70779], 99.90th=[78119], 99.95th=[78119], 00:23:45.755 | 99.99th=[78119] 00:23:45.755 write: IOPS=3913, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1015msec); 0 zone resets 00:23:45.755 slat (nsec): min=1655, max=18104k, avg=109112.52, stdev=845088.40 00:23:45.755 clat (usec): min=1191, max=100692, avg=18222.61, stdev=18021.70 00:23:45.755 lat (usec): min=1202, max=100699, avg=18331.72, stdev=18130.40 00:23:45.755 clat percentiles (msec): 00:23:45.755 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:23:45.755 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 13], 00:23:45.755 | 70.00th=[ 17], 80.00th=[ 23], 90.00th=[ 41], 95.00th=[ 65], 00:23:45.755 | 99.00th=[ 90], 99.50th=[ 99], 99.90th=[ 101], 99.95th=[ 102], 00:23:45.755 | 99.99th=[ 102] 00:23:45.755 bw ( KiB/s): min=14712, max=16048, per=16.23%, avg=15380.00, stdev=944.69, samples=2 00:23:45.755 iops : min= 3678, max= 4012, avg=3845.00, stdev=236.17, samples=2 00:23:45.755 lat (msec) : 2=0.03%, 4=0.90%, 10=29.43%, 20=45.95%, 50=18.66% 00:23:45.755 lat (msec) : 100=4.84%, 250=0.19% 00:23:45.755 cpu : usr=3.55%, sys=3.75%, ctx=253, majf=0, minf=1 00:23:45.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:45.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:45.755 issued rwts: total=3584,3972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:45.755 00:23:45.755 Run status group 0 (all jobs): 00:23:45.755 READ: bw=87.3MiB/s (91.5MB/s), 13.8MiB/s-28.2MiB/s (14.5MB/s-29.5MB/s), io=88.6MiB (92.9MB), run=1003-1015msec 00:23:45.755 WRITE: bw=92.6MiB/s (97.1MB/s), 15.3MiB/s-29.8MiB/s (16.0MB/s-31.2MB/s), io=94.0MiB (98.5MB), run=1003-1015msec 00:23:45.755 00:23:45.755 Disk stats (read/write): 00:23:45.755 nvme0n1: ios=4146/4599, merge=0/0, ticks=45030/52963, in_queue=97993, util=91.88% 00:23:45.755 nvme0n2: ios=5405/5632, merge=0/0, ticks=49639/42863, in_queue=92502, util=87.98% 00:23:45.755 nvme0n3: ios=6030/6144, merge=0/0, ticks=51790/46103, in_queue=97893, util=96.53% 00:23:45.755 nvme0n4: ios=3183/3599, merge=0/0, ticks=46879/52535, in_queue=99414, util=89.55% 00:23:45.755 09:36:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:23:45.755 [global] 00:23:45.755 thread=1 00:23:45.755 invalidate=1 00:23:45.755 rw=randwrite 00:23:45.755 time_based=1 00:23:45.755 runtime=1 00:23:45.755 ioengine=libaio 00:23:45.755 direct=1 00:23:45.755 bs=4096 00:23:45.755 iodepth=128 00:23:45.755 norandommap=0 00:23:45.755 numjobs=1 00:23:45.755 00:23:45.755 verify_dump=1 00:23:45.755 verify_backlog=512 00:23:45.755 verify_state_save=0 00:23:45.755 do_verify=1 00:23:45.755 verify=crc32c-intel 00:23:45.755 [job0] 00:23:45.755 filename=/dev/nvme0n1 
00:23:45.755 [job1] 00:23:45.755 filename=/dev/nvme0n2 00:23:45.755 [job2] 00:23:45.755 filename=/dev/nvme0n3 00:23:45.755 [job3] 00:23:45.755 filename=/dev/nvme0n4 00:23:45.755 Could not set queue depth (nvme0n1) 00:23:45.755 Could not set queue depth (nvme0n2) 00:23:45.755 Could not set queue depth (nvme0n3) 00:23:45.755 Could not set queue depth (nvme0n4) 00:23:46.016 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.016 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.016 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.016 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:46.016 fio-3.35 00:23:46.016 Starting 4 threads 00:23:47.426 00:23:47.426 job0: (groupid=0, jobs=1): err= 0: pid=332974: Thu May 16 09:36:40 2024 00:23:47.426 read: IOPS=3692, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:23:47.426 slat (nsec): min=1803, max=16458k, avg=145989.06, stdev=1072786.63 00:23:47.426 clat (usec): min=3107, max=48755, avg=19255.13, stdev=6363.81 00:23:47.426 lat (usec): min=6803, max=48780, avg=19401.12, stdev=6462.59 00:23:47.426 clat percentiles (usec): 00:23:47.426 | 1.00th=[ 6915], 5.00th=[12649], 10.00th=[13173], 20.00th=[13435], 00:23:47.426 | 30.00th=[15270], 40.00th=[16581], 50.00th=[18744], 60.00th=[19792], 00:23:47.426 | 70.00th=[20579], 80.00th=[23200], 90.00th=[26870], 95.00th=[32900], 00:23:47.426 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[46400], 00:23:47.426 | 99.99th=[48497] 00:23:47.426 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:23:47.426 slat (usec): min=3, max=17868, avg=106.39, stdev=820.44 00:23:47.426 clat (usec): min=1161, max=43911, avg=13662.24, stdev=4745.87 00:23:47.426 lat (usec): min=1171, max=43957, avg=13768.63, stdev=4818.42 00:23:47.426 clat percentiles (usec): 00:23:47.426 | 1.00th=[ 6652], 5.00th=[10814], 10.00th=[11207], 20.00th=[11469], 00:23:47.426 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12518], 00:23:47.426 | 70.00th=[13829], 80.00th=[15533], 90.00th=[17171], 95.00th=[23987], 00:23:47.426 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:23:47.426 | 99.99th=[43779] 00:23:47.426 bw ( KiB/s): min=12920, max=19848, per=23.74%, avg=16384.00, stdev=4898.84, samples=2 00:23:47.426 iops : min= 3230, max= 4962, avg=4096.00, stdev=1224.71, samples=2 00:23:47.426 lat (msec) : 2=0.36%, 4=0.01%, 10=2.46%, 20=75.74%, 50=21.43% 00:23:47.426 cpu : usr=2.59%, sys=4.48%, ctx=162, majf=0, minf=1 00:23:47.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.426 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.426 job1: (groupid=0, jobs=1): err= 0: pid=332983: Thu May 16 09:36:40 2024 00:23:47.426 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:23:47.426 slat (nsec): min=843, max=15984k, avg=73171.32, stdev=599511.90 00:23:47.426 clat (usec): min=3986, max=48956, avg=9490.43, stdev=6615.08 00:23:47.426 lat (usec): min=3992, max=48961, avg=9563.60, stdev=6676.40 00:23:47.426 clat percentiles (usec): 
00:23:47.426 | 1.00th=[ 4555], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 6915], 00:23:47.426 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:23:47.426 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[23200], 95.00th=[26608], 00:23:47.426 | 99.00th=[32113], 99.50th=[35914], 99.90th=[44303], 99.95th=[44303], 00:23:47.426 | 99.99th=[49021] 00:23:47.426 write: IOPS=4855, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1004msec); 0 zone resets 00:23:47.426 slat (nsec): min=1435, max=13736k, avg=132022.72, stdev=751995.41 00:23:47.426 clat (usec): min=658, max=85315, avg=16890.33, stdev=20293.80 00:23:47.426 lat (usec): min=3855, max=85325, avg=17022.35, stdev=20436.26 00:23:47.426 clat percentiles (usec): 00:23:47.426 | 1.00th=[ 4293], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 6849], 00:23:47.426 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:23:47.426 | 70.00th=[ 8455], 80.00th=[23987], 90.00th=[51119], 95.00th=[68682], 00:23:47.426 | 99.00th=[81265], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:23:47.426 | 99.99th=[85459] 00:23:47.426 bw ( KiB/s): min= 6392, max=31584, per=27.52%, avg=18988.00, stdev=17813.43, samples=2 00:23:47.426 iops : min= 1598, max= 7896, avg=4747.00, stdev=4453.36, samples=2 00:23:47.426 lat (usec) : 750=0.01% 00:23:47.426 lat (msec) : 4=0.24%, 10=80.89%, 20=3.09%, 50=10.08%, 100=5.68% 00:23:47.426 cpu : usr=2.09%, sys=4.69%, ctx=426, majf=0, minf=1 00:23:47.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:47.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.427 issued rwts: total=4608,4875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.427 job2: (groupid=0, jobs=1): err= 0: pid=332992: Thu May 16 09:36:40 2024 00:23:47.427 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:23:47.427 slat (nsec): min=884, max=25789k, avg=179619.77, stdev=1256278.15 00:23:47.427 clat (usec): min=3492, max=63368, avg=22263.15, stdev=9461.68 00:23:47.427 lat (usec): min=3499, max=63377, avg=22442.77, stdev=9554.13 00:23:47.427 clat percentiles (usec): 00:23:47.427 | 1.00th=[ 6456], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[16712], 00:23:47.427 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20317], 60.00th=[22676], 00:23:47.427 | 70.00th=[25560], 80.00th=[29230], 90.00th=[36439], 95.00th=[37487], 00:23:47.427 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[59507], 00:23:47.427 | 99.99th=[63177] 00:23:47.427 write: IOPS=2734, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1006msec); 0 zone resets 00:23:47.427 slat (usec): min=2, max=17843, avg=190.31, stdev=983.21 00:23:47.427 clat (usec): min=4488, max=54145, avg=25521.60, stdev=14124.12 00:23:47.427 lat (usec): min=4537, max=54149, avg=25711.91, stdev=14224.16 00:23:47.427 clat percentiles (usec): 00:23:47.427 | 1.00th=[ 5538], 5.00th=[ 6915], 10.00th=[ 6980], 20.00th=[14091], 00:23:47.427 | 30.00th=[15533], 40.00th=[17433], 50.00th=[21365], 60.00th=[33424], 00:23:47.427 | 70.00th=[37487], 80.00th=[40109], 90.00th=[44827], 95.00th=[46924], 00:23:47.427 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:23:47.427 | 99.99th=[54264] 00:23:47.427 bw ( KiB/s): min= 8448, max=12536, per=15.21%, avg=10492.00, stdev=2890.65, samples=2 00:23:47.427 iops : min= 2112, max= 3134, avg=2623.00, stdev=722.66, samples=2 00:23:47.427 lat (msec) : 4=0.38%, 
10=16.16%, 20=29.84%, 50=51.99%, 100=1.64% 00:23:47.427 cpu : usr=1.79%, sys=3.78%, ctx=269, majf=0, minf=1 00:23:47.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:47.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.427 issued rwts: total=2560,2751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.427 job3: (groupid=0, jobs=1): err= 0: pid=332997: Thu May 16 09:36:40 2024 00:23:47.427 read: IOPS=4944, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1005msec) 00:23:47.427 slat (nsec): min=953, max=13051k, avg=82200.03, stdev=576390.42 00:23:47.427 clat (usec): min=2845, max=73091, avg=9944.40, stdev=6356.73 00:23:47.427 lat (usec): min=3231, max=73102, avg=10026.60, stdev=6433.82 00:23:47.427 clat percentiles (usec): 00:23:47.427 | 1.00th=[ 3359], 5.00th=[ 5997], 10.00th=[ 7177], 20.00th=[ 8029], 00:23:47.427 | 30.00th=[ 8225], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:23:47.427 | 70.00th=[ 8979], 80.00th=[11469], 90.00th=[13304], 95.00th=[16057], 00:23:47.427 | 99.00th=[47449], 99.50th=[60556], 99.90th=[72877], 99.95th=[72877], 00:23:47.427 | 99.99th=[72877] 00:23:47.427 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:23:47.427 slat (nsec): min=1557, max=57049k, avg=96969.22, stdev=933460.77 00:23:47.427 clat (usec): min=588, max=73060, avg=13823.89, stdev=16019.48 00:23:47.427 lat (usec): min=597, max=73064, avg=13920.86, stdev=16117.62 00:23:47.427 clat percentiles (usec): 00:23:47.427 | 1.00th=[ 1434], 5.00th=[ 2802], 10.00th=[ 5538], 20.00th=[ 7832], 00:23:47.427 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:23:47.427 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[44827], 95.00th=[58459], 00:23:47.427 | 99.00th=[68682], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:23:47.427 | 99.99th=[72877] 00:23:47.427 bw ( KiB/s): min=12664, max=32392, per=32.65%, avg=22528.00, stdev=13949.80, samples=2 00:23:47.427 iops : min= 3166, max= 8098, avg=5632.00, stdev=3487.45, samples=2 00:23:47.427 lat (usec) : 750=0.05%, 1000=0.27% 00:23:47.427 lat (msec) : 2=0.65%, 4=3.63%, 10=74.92%, 20=12.10%, 50=3.62% 00:23:47.427 lat (msec) : 100=4.75% 00:23:47.427 cpu : usr=4.78%, sys=4.88%, ctx=405, majf=0, minf=1 00:23:47.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:47.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:47.427 issued rwts: total=4969,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:47.427 00:23:47.427 Run status group 0 (all jobs): 00:23:47.427 READ: bw=61.5MiB/s (64.5MB/s), 9.94MiB/s-19.3MiB/s (10.4MB/s-20.3MB/s), io=61.9MiB (64.9MB), run=1004-1006msec 00:23:47.427 WRITE: bw=67.4MiB/s (70.7MB/s), 10.7MiB/s-21.9MiB/s (11.2MB/s-23.0MB/s), io=67.8MiB (71.1MB), run=1004-1006msec 00:23:47.427 00:23:47.427 Disk stats (read/write): 00:23:47.427 nvme0n1: ios=3122/3283, merge=0/0, ticks=29732/21710, in_queue=51442, util=88.28% 00:23:47.427 nvme0n2: ios=3247/3584, merge=0/0, ticks=16728/36706, in_queue=53434, util=91.85% 00:23:47.427 nvme0n3: ios=2079/2447, merge=0/0, ticks=23766/28150, in_queue=51916, util=95.16% 00:23:47.427 nvme0n4: ios=3633/4536, merge=0/0, ticks=26817/48865, in_queue=75682, 
util=100.00% 00:23:47.427 09:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:23:47.427 09:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=333082 00:23:47.427 09:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:23:47.427 09:36:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:23:47.427 [global] 00:23:47.427 thread=1 00:23:47.427 invalidate=1 00:23:47.427 rw=read 00:23:47.427 time_based=1 00:23:47.427 runtime=10 00:23:47.427 ioengine=libaio 00:23:47.427 direct=1 00:23:47.427 bs=4096 00:23:47.427 iodepth=1 00:23:47.427 norandommap=1 00:23:47.427 numjobs=1 00:23:47.427 00:23:47.427 [job0] 00:23:47.427 filename=/dev/nvme0n1 00:23:47.427 [job1] 00:23:47.427 filename=/dev/nvme0n2 00:23:47.427 [job2] 00:23:47.427 filename=/dev/nvme0n3 00:23:47.427 [job3] 00:23:47.427 filename=/dev/nvme0n4 00:23:47.427 Could not set queue depth (nvme0n1) 00:23:47.427 Could not set queue depth (nvme0n2) 00:23:47.427 Could not set queue depth (nvme0n3) 00:23:47.427 Could not set queue depth (nvme0n4) 00:23:47.697 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:47.697 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:47.697 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:47.697 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:47.697 fio-3.35 00:23:47.697 Starting 4 threads 00:23:50.242 09:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:23:50.503 09:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:23:50.503 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=339968, buflen=4096 00:23:50.503 fio: pid=333489, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:50.503 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=6823936, buflen=4096 00:23:50.503 fio: pid=333483, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:50.503 09:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:50.503 09:36:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:23:50.765 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11698176, buflen=4096 00:23:50.765 fio: pid=333450, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:50.765 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:50.765 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:23:50.765 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:50.765 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:23:51.027 fio: io_u error on file 
/dev/nvme0n2: Remote I/O error: read offset=1327104, buflen=4096 00:23:51.027 fio: pid=333465, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:23:51.027 00:23:51.027 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=333450: Thu May 16 09:36:44 2024 00:23:51.027 read: IOPS=983, BW=3934KiB/s (4028kB/s)(11.2MiB/2904msec) 00:23:51.027 slat (usec): min=6, max=28606, avg=38.29, stdev=566.43 00:23:51.027 clat (usec): min=423, max=1227, avg=964.86, stdev=77.12 00:23:51.027 lat (usec): min=433, max=29580, avg=1003.16, stdev=572.46 00:23:51.027 clat percentiles (usec): 00:23:51.027 | 1.00th=[ 734], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 906], 00:23:51.027 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:23:51.027 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:23:51.027 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:23:51.027 | 99.99th=[ 1221] 00:23:51.027 bw ( KiB/s): min= 3896, max= 4136, per=63.45%, avg=4006.40, stdev=86.61, samples=5 00:23:51.027 iops : min= 974, max= 1034, avg=1001.60, stdev=21.65, samples=5 00:23:51.027 lat (usec) : 500=0.04%, 750=1.26%, 1000=64.30% 00:23:51.027 lat (msec) : 2=34.37% 00:23:51.027 cpu : usr=0.90%, sys=4.00%, ctx=2860, majf=0, minf=1 00:23:51.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.027 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.027 issued rwts: total=2857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:51.027 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=333465: Thu May 16 09:36:44 2024 00:23:51.027 read: IOPS=104, BW=415KiB/s (425kB/s)(1296KiB/3123msec) 00:23:51.027 slat (usec): min=6, max=607, avg=24.98, stdev=35.67 00:23:51.027 clat (usec): min=665, max=42069, avg=9526.77, stdev=16671.09 00:23:51.027 lat (usec): min=672, max=42093, avg=9551.75, stdev=16676.98 00:23:51.027 clat percentiles (usec): 00:23:51.027 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914], 00:23:51.027 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 996], 00:23:51.028 | 70.00th=[ 1029], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:23:51.028 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:51.028 | 99.99th=[42206] 00:23:51.028 bw ( KiB/s): min= 96, max= 1795, per=6.00%, avg=379.17, stdev=693.61, samples=6 00:23:51.028 iops : min= 24, max= 448, avg=94.67, stdev=173.10, samples=6 00:23:51.028 lat (usec) : 750=1.85%, 1000=58.46% 00:23:51.028 lat (msec) : 2=18.46%, 50=20.92% 00:23:51.028 cpu : usr=0.06%, sys=0.32%, ctx=327, majf=0, minf=1 00:23:51.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.028 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.028 issued rwts: total=325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:51.028 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=333483: Thu May 16 09:36:44 2024 00:23:51.028 read: IOPS=606, BW=2424KiB/s (2482kB/s)(6664KiB/2749msec) 00:23:51.028 slat (usec): min=5, max=17301, avg=41.48, 
stdev=516.58 00:23:51.028 clat (usec): min=170, max=42058, avg=1587.51, stdev=5169.36 00:23:51.028 lat (usec): min=177, max=42083, avg=1629.00, stdev=5193.48 00:23:51.028 clat percentiles (usec): 00:23:51.028 | 1.00th=[ 578], 5.00th=[ 693], 10.00th=[ 750], 20.00th=[ 791], 00:23:51.028 | 30.00th=[ 832], 40.00th=[ 906], 50.00th=[ 947], 60.00th=[ 979], 00:23:51.028 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:23:51.028 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:23:51.028 | 99.99th=[42206] 00:23:51.028 bw ( KiB/s): min= 96, max= 4040, per=37.43%, avg=2363.20, stdev=2074.00, samples=5 00:23:51.028 iops : min= 24, max= 1010, avg=590.80, stdev=518.50, samples=5 00:23:51.028 lat (usec) : 250=0.06%, 500=0.18%, 750=10.14%, 1000=61.97% 00:23:51.028 lat (msec) : 2=25.73%, 4=0.18%, 50=1.68% 00:23:51.028 cpu : usr=1.16%, sys=2.04%, ctx=1669, majf=0, minf=1 00:23:51.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.028 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.028 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:51.028 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=333489: Thu May 16 09:36:44 2024 00:23:51.028 read: IOPS=32, BW=128KiB/s (131kB/s)(332KiB/2595msec) 00:23:51.028 slat (nsec): min=8069, max=40767, avg=22519.21, stdev=6287.00 00:23:51.028 clat (usec): min=464, max=42093, avg=30980.21, stdev=18234.41 00:23:51.028 lat (usec): min=474, max=42118, avg=31002.70, stdev=18239.15 00:23:51.028 clat percentiles (usec): 00:23:51.028 | 1.00th=[ 465], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 857], 00:23:51.028 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:23:51.028 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:51.028 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:23:51.028 | 99.99th=[42206] 00:23:51.028 bw ( KiB/s): min= 96, max= 256, per=2.04%, avg=129.60, stdev=70.74, samples=5 00:23:51.028 iops : min= 24, max= 64, avg=32.40, stdev=17.69, samples=5 00:23:51.028 lat (usec) : 500=1.19%, 750=8.33%, 1000=15.48% 00:23:51.028 lat (msec) : 2=1.19%, 50=72.62% 00:23:51.028 cpu : usr=0.00%, sys=0.15%, ctx=84, majf=0, minf=2 00:23:51.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.028 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.028 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:51.028 00:23:51.028 Run status group 0 (all jobs): 00:23:51.028 READ: bw=6313KiB/s (6465kB/s), 128KiB/s-3934KiB/s (131kB/s-4028kB/s), io=19.3MiB (20.2MB), run=2595-3123msec 00:23:51.028 00:23:51.028 Disk stats (read/write): 00:23:51.028 nvme0n1: ios=2810/0, merge=0/0, ticks=2679/0, in_queue=2679, util=93.62% 00:23:51.028 nvme0n2: ios=323/0, merge=0/0, ticks=3049/0, in_queue=3049, util=95.73% 00:23:51.028 nvme0n3: ios=1564/0, merge=0/0, ticks=2525/0, in_queue=2525, util=96.03% 00:23:51.028 nvme0n4: ios=84/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.24% 00:23:51.028 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:23:51.028 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:23:51.289 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:51.289 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:23:51.289 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:51.289 09:36:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:23:51.550 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:23:51.550 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 333082 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:51.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:23:51.809 nvmf hotplug test: fio failed as expected 00:23:51.809 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
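The trace above is the hotplug phase of fio.sh: a 10-second fio read job is started in the background against the four exported namespaces, the backing RAID and malloc bdevs are deleted over RPC while it runs, and the test only passes because fio then fails (fio_status=4) with Remote I/O errors on every file. A condensed sketch of that flow (not the script verbatim; the backgrounding with `&`/`$!` is assumed plumbing, the commands and paths are the ones traced in this run):

  # Paths as used in this run; adjust for a local checkout.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start a 10-second read job against the connected namespaces in the background.
  "$SPDK_DIR/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Hot-remove the backing bdevs while fio is still issuing I/O.
  "$SPDK_DIR/scripts/rpc.py" bdev_raid_delete concat0
  "$SPDK_DIR/scripts/rpc.py" bdev_raid_delete raid0
  for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$SPDK_DIR/scripts/rpc.py" bdev_malloc_delete "$malloc"
  done

  # fio is expected to exit non-zero once its files return Remote I/O errors.
  fio_status=0
  wait "$fio_pid" || fio_status=$?
  [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

  # Tear down the initiator side and the subsystem.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The teardown that follows in the trace (nvmftestfini: module unload, killing the target process) is the shared epilogue used by all the nvmf target tests.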
00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.071 rmmod nvme_tcp 00:23:52.071 rmmod nvme_fabrics 00:23:52.071 rmmod nvme_keyring 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 329524 ']' 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 329524 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 329524 ']' 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 329524 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 329524 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 329524' 00:23:52.071 killing process with pid 329524 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 329524 00:23:52.071 [2024-05-16 09:36:45.576790] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:52.071 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 329524 00:23:52.332 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.332 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.332 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.333 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.333 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.333 09:36:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.333 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.333 09:36:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.248 09:36:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:54.248 00:23:54.248 real 0m28.431s 00:23:54.248 user 2m29.518s 00:23:54.248 sys 0m8.841s 00:23:54.248 09:36:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:54.248 09:36:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.248 ************************************ 00:23:54.248 END TEST nvmf_fio_target 00:23:54.248 ************************************ 00:23:54.510 09:36:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test 
nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:54.510 09:36:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:54.510 09:36:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:54.510 09:36:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:54.510 ************************************ 00:23:54.510 START TEST nvmf_bdevio 00:23:54.510 ************************************ 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:23:54.510 * Looking for test storage... 00:23:54.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.510 09:36:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.511 09:36:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
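bdevio.sh starts the same way as the previous test: nvmftestinit, traced over the next lines, first discovers the two E810 ports (cvl_0_0 and cvl_0_1) and then builds the point-to-point NVMe/TCP test network. A condensed sketch of that network step, with the interface, namespace, and address values taken from this run (they differ per machine):

  # Interface names as discovered in this run; cvl_0_0 becomes the target side.
  TARGET_IF=cvl_0_0
  INITIATOR_IF=cvl_0_1
  NETNS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  # Isolate the target port in its own namespace and address both ends.
  ip netns add "$NETNS"
  ip link set "$TARGET_IF" netns "$NETNS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NETNS" ip link set "$TARGET_IF" up
  ip netns exec "$NETNS" ip link set lo up

  # Allow NVMe/TCP traffic to port 4420 and verify reachability both ways.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NETNS" ping -c 1 10.0.0.1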
00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:23:54.511 09:36:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.106 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:01.107 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:01.107 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:01.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:01.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.107 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.368 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:01.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:24:01.630 00:24:01.630 --- 10.0.0.2 ping statistics --- 00:24:01.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.630 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:01.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:24:01.630 00:24:01.630 --- 10.0.0.1 ping statistics --- 00:24:01.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.630 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.630 09:36:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=338470 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 338470 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 338470 ']' 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.630 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:01.630 [2024-05-16 09:36:55.063505] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
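The block above sets up the point-to-point NVMe/TCP test bed: the two E810 ports found earlier (cvl_0_0 and cvl_0_1) are split across network namespaces, the target side gets 10.0.0.2 inside cvl_0_0_ns_spdk, the initiator keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened, reachability is ping-checked in both directions, and nvmf_tgt is then launched under ip netns exec. Condensed from the trace (same interface names and addresses; only the binary path is shortened):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &   # the app started above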
00:24:01.630 [2024-05-16 09:36:55.063566] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.630 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.630 [2024-05-16 09:36:55.150997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.892 [2024-05-16 09:36:55.242669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.892 [2024-05-16 09:36:55.242726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.892 [2024-05-16 09:36:55.242734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.892 [2024-05-16 09:36:55.242742] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.892 [2024-05-16 09:36:55.242748] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.892 [2024-05-16 09:36:55.242822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:01.892 [2024-05-16 09:36:55.242952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:01.892 [2024-05-16 09:36:55.243112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.892 [2024-05-16 09:36:55.243112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:02.464 [2024-05-16 09:36:55.912423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:02.464 Malloc0 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:02.464 [2024-05-16 09:36:55.977061] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:02.464 [2024-05-16 09:36:55.977425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.464 { 00:24:02.464 "params": { 00:24:02.464 "name": "Nvme$subsystem", 00:24:02.464 "trtype": "$TEST_TRANSPORT", 00:24:02.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.464 "adrfam": "ipv4", 00:24:02.464 "trsvcid": "$NVMF_PORT", 00:24:02.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.464 "hdgst": ${hdgst:-false}, 00:24:02.464 "ddgst": ${ddgst:-false} 00:24:02.464 }, 00:24:02.464 "method": "bdev_nvme_attach_controller" 00:24:02.464 } 00:24:02.464 EOF 00:24:02.464 )") 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:24:02.464 09:36:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:02.464 "params": { 00:24:02.464 "name": "Nvme1", 00:24:02.464 "trtype": "tcp", 00:24:02.464 "traddr": "10.0.0.2", 00:24:02.464 "adrfam": "ipv4", 00:24:02.464 "trsvcid": "4420", 00:24:02.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.464 "hdgst": false, 00:24:02.464 "ddgst": false 00:24:02.464 }, 00:24:02.464 "method": "bdev_nvme_attach_controller" 00:24:02.464 }' 00:24:02.724 [2024-05-16 09:36:56.042896] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
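With the target up, the bdevio case provisions it through five RPCs and then points the bdevio app at it via the JSON fragment printed above. Written out as plain rpc.py calls (the rpc_cmd and gen_nvmf_target_json wrappers belong to the test framework; the RPC names and arguments are exactly the ones traced, with paths shortened):

  RPC="scripts/rpc.py"                                   # defaults to /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB ramdisk, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio then consumes the rendered config, equivalent to:
  #   { "method": "bdev_nvme_attach_controller",
  #     "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
  #                 "adrfam": "ipv4", "trsvcid": "4420",
  #                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host1",
  #                 "hdgst": false, "ddgst": false } }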
00:24:02.725 [2024-05-16 09:36:56.042968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338576 ] 00:24:02.725 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.725 [2024-05-16 09:36:56.108721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:02.725 [2024-05-16 09:36:56.184313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.725 [2024-05-16 09:36:56.184433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.725 [2024-05-16 09:36:56.184436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.986 I/O targets: 00:24:02.986 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:02.986 00:24:02.986 00:24:02.986 CUnit - A unit testing framework for C - Version 2.1-3 00:24:02.986 http://cunit.sourceforge.net/ 00:24:02.986 00:24:02.986 00:24:02.986 Suite: bdevio tests on: Nvme1n1 00:24:02.986 Test: blockdev write read block ...passed 00:24:02.986 Test: blockdev write zeroes read block ...passed 00:24:02.986 Test: blockdev write zeroes read no split ...passed 00:24:02.986 Test: blockdev write zeroes read split ...passed 00:24:02.986 Test: blockdev write zeroes read split partial ...passed 00:24:02.986 Test: blockdev reset ...[2024-05-16 09:36:56.505471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:02.986 [2024-05-16 09:36:56.505536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x891820 (9): Bad file descriptor 00:24:03.248 [2024-05-16 09:36:56.647521] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:03.248 passed 00:24:03.248 Test: blockdev write read 8 blocks ...passed 00:24:03.248 Test: blockdev write read size > 128k ...passed 00:24:03.248 Test: blockdev write read invalid size ...passed 00:24:03.248 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:03.248 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:03.248 Test: blockdev write read max offset ...passed 00:24:03.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:03.248 Test: blockdev writev readv 8 blocks ...passed 00:24:03.248 Test: blockdev writev readv 30 x 1block ...passed 00:24:03.509 Test: blockdev writev readv block ...passed 00:24:03.509 Test: blockdev writev readv size > 128k ...passed 00:24:03.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:03.509 Test: blockdev comparev and writev ...[2024-05-16 09:36:56.832511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.832539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.832549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.832555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.833063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.833072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.833081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.833087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.833556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.833565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.833575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.833580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.834050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.834062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.834072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:03.509 [2024-05-16 09:36:56.834077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:03.509 passed 00:24:03.509 Test: blockdev nvme passthru rw ...passed 00:24:03.509 Test: blockdev nvme passthru vendor specific ...[2024-05-16 09:36:56.918949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.509 [2024-05-16 09:36:56.918959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.919285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.509 [2024-05-16 09:36:56.919294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.919601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.509 [2024-05-16 09:36:56.919608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:03.509 [2024-05-16 09:36:56.919922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:03.509 [2024-05-16 09:36:56.919931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:03.509 passed 00:24:03.509 Test: blockdev nvme admin passthru ...passed 00:24:03.509 Test: blockdev copy ...passed 00:24:03.509 00:24:03.509 Run Summary: Type Total Ran Passed Failed Inactive 00:24:03.509 suites 1 1 n/a 0 0 00:24:03.509 tests 23 23 23 0 0 00:24:03.509 asserts 152 152 152 0 n/a 00:24:03.509 00:24:03.509 Elapsed time = 1.313 seconds 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.771 rmmod nvme_tcp 00:24:03.771 rmmod nvme_fabrics 00:24:03.771 rmmod nvme_keyring 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 338470 ']' 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 338470 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
338470 ']' 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 338470 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 338470 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 338470' 00:24:03.771 killing process with pid 338470 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 338470 00:24:03.771 [2024-05-16 09:36:57.230761] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:03.771 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 338470 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.033 09:36:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.951 09:36:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:05.951 00:24:05.951 real 0m11.583s 00:24:05.951 user 0m12.596s 00:24:05.951 sys 0m5.728s 00:24:05.952 09:36:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:05.952 09:36:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:05.952 ************************************ 00:24:05.952 END TEST nvmf_bdevio 00:24:05.952 ************************************ 00:24:05.952 09:36:59 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:24:05.952 09:36:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:05.952 09:36:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:05.952 09:36:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.215 ************************************ 00:24:06.215 START TEST nvmf_auth_target 00:24:06.215 ************************************ 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:24:06.215 * Looking for test storage... 
00:24:06.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.215 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.216 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:14.366 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:14.366 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.366 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:14.367 Found net devices under 
0000:4b:00.0: cvl_0_0 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:14.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:24:14.367 00:24:14.367 --- 10.0.0.2 ping statistics --- 00:24:14.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.367 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:14.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:24:14.367 00:24:14.367 --- 10.0.0.1 ping statistics --- 00:24:14.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.367 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=342901 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 342901 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 342901 ']' 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
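nvmfappstart, traced above, backgrounds nvmf_tgt inside the namespace (this time with -L nvmf_auth instead of a core mask) and waitforlisten holds the script until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, using rpc_get_methods as the probe; the real helper in autotest_common.sh is more thorough, so treat this as an illustration only:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target responds
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1        # give up if the target died during startup
      sleep 0.5
  done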
00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.367 09:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=343246 00:24:14.367 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=75b3af7c590cf53063429b1f1e575af1275dfb419fb7f0d0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Yp9 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 75b3af7c590cf53063429b1f1e575af1275dfb419fb7f0d0 0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 75b3af7c590cf53063429b1f1e575af1275dfb419fb7f0d0 0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=75b3af7c590cf53063429b1f1e575af1275dfb419fb7f0d0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Yp9 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Yp9 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.Yp9 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4fb8939227f4710457d3531ec2247e3f 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mG0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4fb8939227f4710457d3531ec2247e3f 1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4fb8939227f4710457d3531ec2247e3f 1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4fb8939227f4710457d3531ec2247e3f 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mG0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mG0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.mG0 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=742a34fb9728788f83f219a57bab99583c8de4e1f415c820 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Kxc 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 742a34fb9728788f83f219a57bab99583c8de4e1f415c820 2 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 742a34fb9728788f83f219a57bab99583c8de4e1f415c820 2 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=742a34fb9728788f83f219a57bab99583c8de4e1f415c820 00:24:14.368 
09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Kxc 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Kxc 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.Kxc 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba7842d70db97df95254d03970e2e4e501880aded8733b74852ce1e296da3fa3 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.y5D 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba7842d70db97df95254d03970e2e4e501880aded8733b74852ce1e296da3fa3 3 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba7842d70db97df95254d03970e2e4e501880aded8733b74852ce1e296da3fa3 3 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba7842d70db97df95254d03970e2e4e501880aded8733b74852ce1e296da3fa3 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:24:14.368 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.y5D 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.y5D 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.y5D 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 342901 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 342901 ']' 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
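gen_dhchap_key, traced above for all four keys, reads N random bytes as a hex string from /dev/urandom and hands it to an inline python call that wraps it into the DHHC-1 container later passed on the nvme connect command line (the null-digest key above ends up as DHHC-1:00:NzViM2Fm...:). A sketch of that wrapping for the "null 48" case, assuming the trailer appended before base64-encoding is a little-endian CRC-32 of the key text; that detail lives in the helper and is not spelled out in the trace:

  key=$(xxd -p -c0 -l 24 /dev/urandom)      # 48 hex characters of key material
  # hash id prefix: 00=null, 01=sha256, 02=sha384, 03=sha512 (the digest map used above)
  secret=$(python3 -c 'import sys,base64,struct,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key")
  keyfile=$(mktemp -t spdk.key-null.XXX)
  echo "$secret" > "$keyfile" && chmod 0600 "$keyfile"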
00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.369 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 343246 /var/tmp/host.sock 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 343246 ']' 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:24:14.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.631 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yp9 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Yp9 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Yp9 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mG0 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.mG0 00:24:14.892 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.mG0 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Kxc 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Kxc 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Kxc 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y5D 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.y5D 00:24:15.154 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.y5D 00:24:15.416 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:24:15.416 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.416 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:15.416 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:15.416 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.679 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.679 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.679 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:15.679 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:15.679 00:24:15.679 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:15.679 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:15.679 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:15.941 { 00:24:15.941 "cntlid": 1, 00:24:15.941 "qid": 0, 00:24:15.941 "state": "enabled", 00:24:15.941 "listen_address": { 00:24:15.941 "trtype": "TCP", 00:24:15.941 "adrfam": "IPv4", 00:24:15.941 "traddr": "10.0.0.2", 00:24:15.941 "trsvcid": "4420" 00:24:15.941 }, 00:24:15.941 "peer_address": { 00:24:15.941 "trtype": "TCP", 00:24:15.941 "adrfam": "IPv4", 00:24:15.941 "traddr": "10.0.0.1", 00:24:15.941 "trsvcid": "44948" 00:24:15.941 }, 00:24:15.941 "auth": { 00:24:15.941 "state": "completed", 00:24:15.941 "digest": "sha256", 00:24:15.941 "dhgroup": "null" 00:24:15.941 } 00:24:15.941 } 00:24:15.941 ]' 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:15.941 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:16.212 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.212 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.212 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.213 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:24:20.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:20.428 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.428 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.689 09:37:14 
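For reference, one pass of the authentication cycle traced above reduces to the sketch below. rpc_cmd and hostrpc are the test's own wrappers around scripts/rpc.py (hostrpc adds -s /var/tmp/host.sock), their retry/capture logic is omitted, and HOSTNQN/SUBNQN are shorthand for the uuid-based host NQN and subsystem NQN seen in the trace; the sha256/null/key0 combination is the one this pass used.

  # Sketch of one connect/authenticate pass, assuming rpc_cmd/hostrpc behave as in the trace.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect completed
  hostrpc bdev_nvme_detach_controller nvme0
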
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:20.689 { 00:24:20.689 "cntlid": 3, 00:24:20.689 "qid": 0, 00:24:20.689 "state": "enabled", 00:24:20.689 "listen_address": { 00:24:20.689 "trtype": "TCP", 00:24:20.689 "adrfam": "IPv4", 00:24:20.689 "traddr": "10.0.0.2", 00:24:20.689 "trsvcid": "4420" 00:24:20.689 }, 00:24:20.689 "peer_address": { 00:24:20.689 "trtype": "TCP", 00:24:20.689 "adrfam": "IPv4", 00:24:20.689 "traddr": "10.0.0.1", 00:24:20.689 "trsvcid": "44972" 00:24:20.689 }, 00:24:20.689 "auth": { 00:24:20.689 "state": "completed", 00:24:20.689 "digest": "sha256", 00:24:20.689 "dhgroup": "null" 00:24:20.689 } 00:24:20.689 } 00:24:20.689 ]' 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.689 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.950 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:24:21.526 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:21.788 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:22.049 00:24:22.049 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:22.049 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:22.049 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:22.310 { 00:24:22.310 "cntlid": 5, 00:24:22.310 "qid": 0, 00:24:22.310 "state": "enabled", 00:24:22.310 "listen_address": { 00:24:22.310 "trtype": "TCP", 00:24:22.310 "adrfam": "IPv4", 00:24:22.310 "traddr": "10.0.0.2", 00:24:22.310 "trsvcid": "4420" 00:24:22.310 }, 00:24:22.310 "peer_address": { 00:24:22.310 "trtype": "TCP", 00:24:22.310 "adrfam": "IPv4", 00:24:22.310 "traddr": "10.0.0.1", 00:24:22.310 "trsvcid": "57656" 00:24:22.310 }, 00:24:22.310 "auth": { 00:24:22.310 "state": "completed", 00:24:22.310 "digest": "sha256", 00:24:22.310 "dhgroup": "null" 00:24:22.310 } 00:24:22.310 } 00:24:22.310 ]' 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.310 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.572 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:24:23.514 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:23.515 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:23.777 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:23.777 09:37:17 
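The qpair check that follows each attach is the core assertion of the test: the trace captures the nvmf_subsystem_get_qpairs output and inspects its auth block. A minimal sketch, assuming rpc_cmd is the test's target-side wrapper and using the sha256/null expectations of the passes above:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]     # negotiated digest
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]       # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # DH-HMAC-CHAP handshake finished
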
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:23.777 { 00:24:23.777 "cntlid": 7, 00:24:23.777 "qid": 0, 00:24:23.777 "state": "enabled", 00:24:23.777 "listen_address": { 00:24:23.777 "trtype": "TCP", 00:24:23.777 "adrfam": "IPv4", 00:24:23.777 "traddr": "10.0.0.2", 00:24:23.777 "trsvcid": "4420" 00:24:23.777 }, 00:24:23.777 "peer_address": { 00:24:23.777 "trtype": "TCP", 00:24:23.777 "adrfam": "IPv4", 00:24:23.777 "traddr": "10.0.0.1", 00:24:23.777 "trsvcid": "57676" 00:24:23.777 }, 00:24:23.777 "auth": { 00:24:23.777 "state": "completed", 00:24:23.777 "digest": "sha256", 00:24:23.777 "dhgroup": "null" 00:24:23.777 } 00:24:23.777 } 00:24:23.777 ]' 00:24:23.777 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.039 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.300 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:24.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:24.872 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:24.873 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:25.134 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:25.395 00:24:25.395 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:25.395 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:25.395 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:25.395 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.657 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:25.657 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.657 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.657 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.657 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:25.657 { 00:24:25.657 "cntlid": 9, 00:24:25.657 "qid": 0, 00:24:25.657 "state": "enabled", 00:24:25.657 "listen_address": { 00:24:25.657 "trtype": "TCP", 00:24:25.657 "adrfam": "IPv4", 00:24:25.657 "traddr": "10.0.0.2", 00:24:25.657 "trsvcid": "4420" 00:24:25.657 }, 00:24:25.657 "peer_address": { 00:24:25.657 "trtype": "TCP", 00:24:25.657 "adrfam": "IPv4", 00:24:25.657 "traddr": "10.0.0.1", 
00:24:25.657 "trsvcid": "57706" 00:24:25.657 }, 00:24:25.657 "auth": { 00:24:25.657 "state": "completed", 00:24:25.657 "digest": "sha256", 00:24:25.657 "dhgroup": "ffdhe2048" 00:24:25.657 } 00:24:25.657 } 00:24:25.657 ]' 00:24:25.657 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:25.657 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.919 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:26.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:26.491 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:26.752 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:27.014 00:24:27.014 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:27.014 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:27.014 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:27.275 { 00:24:27.275 "cntlid": 11, 00:24:27.275 "qid": 0, 00:24:27.275 "state": "enabled", 00:24:27.275 "listen_address": { 00:24:27.275 "trtype": "TCP", 00:24:27.275 "adrfam": "IPv4", 00:24:27.275 "traddr": "10.0.0.2", 00:24:27.275 "trsvcid": "4420" 00:24:27.275 }, 00:24:27.275 "peer_address": { 00:24:27.275 "trtype": "TCP", 00:24:27.275 "adrfam": "IPv4", 00:24:27.275 "traddr": "10.0.0.1", 00:24:27.275 "trsvcid": "57728" 00:24:27.275 }, 00:24:27.275 "auth": { 00:24:27.275 "state": "completed", 00:24:27.275 "digest": "sha256", 00:24:27.275 "dhgroup": "ffdhe2048" 00:24:27.275 } 00:24:27.275 } 00:24:27.275 ]' 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.275 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.536 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 
--hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:24:28.108 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:28.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:28.369 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:28.630 00:24:28.630 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:28.630 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:28.630 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:28.890 { 00:24:28.890 "cntlid": 13, 00:24:28.890 "qid": 0, 00:24:28.890 "state": "enabled", 00:24:28.890 "listen_address": { 00:24:28.890 "trtype": "TCP", 00:24:28.890 "adrfam": "IPv4", 00:24:28.890 "traddr": "10.0.0.2", 00:24:28.890 "trsvcid": "4420" 00:24:28.890 }, 00:24:28.890 "peer_address": { 00:24:28.890 "trtype": "TCP", 00:24:28.890 "adrfam": "IPv4", 00:24:28.890 "traddr": "10.0.0.1", 00:24:28.890 "trsvcid": "57756" 00:24:28.890 }, 00:24:28.890 "auth": { 00:24:28.890 "state": "completed", 00:24:28.890 "digest": "sha256", 00:24:28.890 "dhgroup": "ffdhe2048" 00:24:28.890 } 00:24:28.890 } 00:24:28.890 ]' 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.890 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.151 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:24:29.723 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:29.985 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:30.244 00:24:30.244 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:30.244 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:30.244 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:30.505 { 00:24:30.505 "cntlid": 15, 00:24:30.505 "qid": 0, 00:24:30.505 "state": "enabled", 00:24:30.505 "listen_address": { 00:24:30.505 "trtype": "TCP", 00:24:30.505 "adrfam": "IPv4", 00:24:30.505 "traddr": "10.0.0.2", 00:24:30.505 "trsvcid": "4420" 00:24:30.505 }, 00:24:30.505 "peer_address": { 00:24:30.505 "trtype": "TCP", 00:24:30.505 "adrfam": "IPv4", 00:24:30.505 "traddr": "10.0.0.1", 00:24:30.505 "trsvcid": "57786" 00:24:30.505 }, 00:24:30.505 "auth": { 00:24:30.505 "state": "completed", 00:24:30.505 "digest": "sha256", 00:24:30.505 "dhgroup": "ffdhe2048" 00:24:30.505 } 00:24:30.505 } 00:24:30.505 ]' 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:30.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:30.505 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.505 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.505 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.766 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:24:31.873 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.873 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:31.873 09:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.873 09:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.874 09:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.874 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.874 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:31.874 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.874 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:31.874 09:37:25 
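Each pass also exercises the kernel initiator with the plaintext DHHC-1 secret that corresponds to the key registered on the target, then tears the host back out of the subsystem. The leg below restates those commands from the trace (the secret shown is the key0 value); rpc_cmd is again the test's wrapper for the target-side rpc.py.

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
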
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:31.874 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:31.874 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:32.143 { 00:24:32.143 "cntlid": 17, 00:24:32.143 "qid": 0, 00:24:32.143 "state": "enabled", 00:24:32.143 "listen_address": { 00:24:32.143 "trtype": "TCP", 00:24:32.143 "adrfam": "IPv4", 00:24:32.143 "traddr": "10.0.0.2", 00:24:32.143 "trsvcid": "4420" 00:24:32.143 }, 00:24:32.143 "peer_address": { 00:24:32.143 "trtype": "TCP", 00:24:32.143 "adrfam": "IPv4", 00:24:32.143 "traddr": "10.0.0.1", 00:24:32.143 "trsvcid": "33684" 00:24:32.143 }, 00:24:32.143 "auth": { 00:24:32.143 "state": "completed", 00:24:32.143 "digest": "sha256", 00:24:32.143 "dhgroup": "ffdhe3072" 00:24:32.143 } 00:24:32.143 } 00:24:32.143 ]' 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:32.143 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.144 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.144 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.404 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:24:32.983 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:32.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:32.983 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:33.250 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:33.518 00:24:33.518 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:33.518 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.518 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:33.787 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.787 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:33.787 09:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.787 09:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.787 09:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.787 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:33.787 { 
00:24:33.787 "cntlid": 19, 00:24:33.787 "qid": 0, 00:24:33.787 "state": "enabled", 00:24:33.787 "listen_address": { 00:24:33.787 "trtype": "TCP", 00:24:33.787 "adrfam": "IPv4", 00:24:33.787 "traddr": "10.0.0.2", 00:24:33.787 "trsvcid": "4420" 00:24:33.787 }, 00:24:33.787 "peer_address": { 00:24:33.787 "trtype": "TCP", 00:24:33.788 "adrfam": "IPv4", 00:24:33.788 "traddr": "10.0.0.1", 00:24:33.788 "trsvcid": "33716" 00:24:33.788 }, 00:24:33.788 "auth": { 00:24:33.788 "state": "completed", 00:24:33.788 "digest": "sha256", 00:24:33.788 "dhgroup": "ffdhe3072" 00:24:33.788 } 00:24:33.788 } 00:24:33.788 ]' 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.788 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.064 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:34.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.685 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:34.963 
09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:34.963 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:35.231 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.231 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:35.231 { 00:24:35.231 "cntlid": 21, 00:24:35.231 "qid": 0, 00:24:35.231 "state": "enabled", 00:24:35.231 "listen_address": { 00:24:35.231 "trtype": "TCP", 00:24:35.231 "adrfam": "IPv4", 00:24:35.231 "traddr": "10.0.0.2", 00:24:35.231 "trsvcid": "4420" 00:24:35.231 }, 00:24:35.231 "peer_address": { 00:24:35.231 "trtype": "TCP", 00:24:35.231 "adrfam": "IPv4", 00:24:35.231 "traddr": "10.0.0.1", 00:24:35.231 "trsvcid": "33738" 00:24:35.231 }, 00:24:35.231 "auth": { 00:24:35.232 "state": "completed", 00:24:35.232 "digest": "sha256", 00:24:35.232 "dhgroup": "ffdhe3072" 00:24:35.232 } 00:24:35.232 } 00:24:35.232 ]' 00:24:35.232 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:35.501 09:37:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:35.770 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:36.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:36.357 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.634 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:36.635 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:36.635 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:36.927 09:37:30 
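The qpair dump and jq checks that follow are how the script confirms that authentication actually completed for the attached controller. A sketch of that verification, under the same assumptions as above; the jq filters are the ones visible in the log, combined here for brevity:

# Confirm the host-side controller came up
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# On the target, inspect the qpair and its negotiated auth parameters
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# expected output: sha256, the dhgroup under test (ffdhe3072 at this point), completed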
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:36.927 { 00:24:36.927 "cntlid": 23, 00:24:36.927 "qid": 0, 00:24:36.927 "state": "enabled", 00:24:36.927 "listen_address": { 00:24:36.927 "trtype": "TCP", 00:24:36.927 "adrfam": "IPv4", 00:24:36.927 "traddr": "10.0.0.2", 00:24:36.927 "trsvcid": "4420" 00:24:36.927 }, 00:24:36.927 "peer_address": { 00:24:36.927 "trtype": "TCP", 00:24:36.927 "adrfam": "IPv4", 00:24:36.927 "traddr": "10.0.0.1", 00:24:36.927 "trsvcid": "33760" 00:24:36.927 }, 00:24:36.927 "auth": { 00:24:36.927 "state": "completed", 00:24:36.927 "digest": "sha256", 00:24:36.927 "dhgroup": "ffdhe3072" 00:24:36.927 } 00:24:36.927 } 00:24:36.927 ]' 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:36.927 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:37.200 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:37.200 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:37.200 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.200 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.200 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.200 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.205 09:37:31 
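At this point the outer loop advances from ffdhe3072 to ffdhe4096 and the host-side bdev_nvme options are re-applied before the keys are cycled again. A sketch of that reconfiguration step, same assumptions as above:

# Restrict the SPDK host to a single digest/dhgroup combination for this iteration
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096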
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:38.205 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:38.477 00:24:38.477 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:38.477 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:38.477 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:38.753 { 00:24:38.753 "cntlid": 25, 00:24:38.753 "qid": 0, 00:24:38.753 "state": "enabled", 00:24:38.753 "listen_address": { 00:24:38.753 "trtype": "TCP", 00:24:38.753 "adrfam": "IPv4", 00:24:38.753 "traddr": "10.0.0.2", 00:24:38.753 "trsvcid": "4420" 00:24:38.753 }, 00:24:38.753 "peer_address": { 00:24:38.753 "trtype": "TCP", 00:24:38.753 "adrfam": "IPv4", 00:24:38.753 "traddr": "10.0.0.1", 00:24:38.753 "trsvcid": "33766" 00:24:38.753 }, 
00:24:38.753 "auth": { 00:24:38.753 "state": "completed", 00:24:38.753 "digest": "sha256", 00:24:38.753 "dhgroup": "ffdhe4096" 00:24:38.753 } 00:24:38.753 } 00:24:38.753 ]' 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.753 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:39.032 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.665 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:39.937 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:40.208 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:40.208 { 00:24:40.208 "cntlid": 27, 00:24:40.208 "qid": 0, 00:24:40.208 "state": "enabled", 00:24:40.208 "listen_address": { 00:24:40.208 "trtype": "TCP", 00:24:40.208 "adrfam": "IPv4", 00:24:40.208 "traddr": "10.0.0.2", 00:24:40.208 "trsvcid": "4420" 00:24:40.208 }, 00:24:40.208 "peer_address": { 00:24:40.208 "trtype": "TCP", 00:24:40.208 "adrfam": "IPv4", 00:24:40.208 "traddr": "10.0.0.1", 00:24:40.208 "trsvcid": "33790" 00:24:40.208 }, 00:24:40.208 "auth": { 00:24:40.208 "state": "completed", 00:24:40.208 "digest": "sha256", 00:24:40.208 "dhgroup": "ffdhe4096" 00:24:40.208 } 00:24:40.208 } 00:24:40.208 ]' 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:40.208 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:40.486 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:40.486 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:40.486 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:40.486 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:40.486 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:40.486 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 
--dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:41.489 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:41.766 00:24:41.766 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:41.766 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:41.766 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.036 09:37:35 
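Between iterations the script detaches the SPDK host controller and removes the host entry from the subsystem, so the next key and dhgroup start from a clean state. A sketch of that teardown, same assumptions as above:

# Host side: drop the authenticated controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Target side: revoke the host's access and its DH-HMAC-CHAP key binding
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204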
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:42.036 { 00:24:42.036 "cntlid": 29, 00:24:42.036 "qid": 0, 00:24:42.036 "state": "enabled", 00:24:42.036 "listen_address": { 00:24:42.036 "trtype": "TCP", 00:24:42.036 "adrfam": "IPv4", 00:24:42.036 "traddr": "10.0.0.2", 00:24:42.036 "trsvcid": "4420" 00:24:42.036 }, 00:24:42.036 "peer_address": { 00:24:42.036 "trtype": "TCP", 00:24:42.036 "adrfam": "IPv4", 00:24:42.036 "traddr": "10.0.0.1", 00:24:42.036 "trsvcid": "33812" 00:24:42.036 }, 00:24:42.036 "auth": { 00:24:42.036 "state": "completed", 00:24:42.036 "digest": "sha256", 00:24:42.036 "dhgroup": "ffdhe4096" 00:24:42.036 } 00:24:42.036 } 00:24:42.036 ]' 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.036 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:42.314 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:42.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:42.911 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:43.189 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:43.466 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.466 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.466 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.466 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:43.466 { 00:24:43.466 "cntlid": 31, 00:24:43.466 "qid": 0, 00:24:43.466 "state": "enabled", 00:24:43.466 "listen_address": { 00:24:43.466 "trtype": "TCP", 00:24:43.466 "adrfam": "IPv4", 00:24:43.466 "traddr": "10.0.0.2", 00:24:43.466 "trsvcid": "4420" 00:24:43.466 }, 00:24:43.466 "peer_address": { 00:24:43.466 "trtype": "TCP", 00:24:43.466 "adrfam": "IPv4", 00:24:43.466 "traddr": "10.0.0.1", 00:24:43.466 "trsvcid": "46134" 00:24:43.466 }, 00:24:43.466 "auth": { 00:24:43.466 "state": "completed", 00:24:43.466 "digest": "sha256", 00:24:43.466 "dhgroup": "ffdhe4096" 00:24:43.466 } 00:24:43.466 } 00:24:43.466 ]' 00:24:43.466 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:43.739 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.011 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:24:44.613 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:44.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:44.613 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:44.613 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.613 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.613 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.613 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.895 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.895 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:44.895 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:45.181 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:45.181 { 00:24:45.181 "cntlid": 33, 00:24:45.181 "qid": 0, 00:24:45.181 "state": "enabled", 00:24:45.181 "listen_address": { 00:24:45.181 "trtype": "TCP", 00:24:45.181 "adrfam": "IPv4", 00:24:45.181 "traddr": "10.0.0.2", 00:24:45.181 "trsvcid": "4420" 00:24:45.181 }, 00:24:45.181 "peer_address": { 00:24:45.181 "trtype": "TCP", 00:24:45.181 "adrfam": "IPv4", 00:24:45.181 "traddr": "10.0.0.1", 00:24:45.181 "trsvcid": "46152" 00:24:45.181 }, 00:24:45.181 "auth": { 00:24:45.181 "state": "completed", 00:24:45.181 "digest": "sha256", 00:24:45.181 "dhgroup": "ffdhe6144" 00:24:45.181 } 00:24:45.181 } 00:24:45.181 ]' 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:45.181 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:45.462 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:45.462 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:45.462 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:45.462 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:45.462 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:45.462 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:46.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:46.459 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:46.745 00:24:46.745 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:46.745 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:46.745 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.029 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.029 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:47.029 09:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.029 09:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.029 09:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.029 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:47.029 { 00:24:47.029 "cntlid": 35, 00:24:47.029 "qid": 0, 
00:24:47.029 "state": "enabled", 00:24:47.029 "listen_address": { 00:24:47.029 "trtype": "TCP", 00:24:47.029 "adrfam": "IPv4", 00:24:47.029 "traddr": "10.0.0.2", 00:24:47.029 "trsvcid": "4420" 00:24:47.029 }, 00:24:47.029 "peer_address": { 00:24:47.029 "trtype": "TCP", 00:24:47.029 "adrfam": "IPv4", 00:24:47.029 "traddr": "10.0.0.1", 00:24:47.029 "trsvcid": "46188" 00:24:47.029 }, 00:24:47.030 "auth": { 00:24:47.030 "state": "completed", 00:24:47.030 "digest": "sha256", 00:24:47.030 "dhgroup": "ffdhe6144" 00:24:47.030 } 00:24:47.030 } 00:24:47.030 ]' 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:47.030 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:47.298 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:48.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:48.277 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:48.557 00:24:48.557 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:48.557 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:48.558 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.831 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.831 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:48.831 09:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.831 09:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.831 09:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.831 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:48.831 { 00:24:48.831 "cntlid": 37, 00:24:48.832 "qid": 0, 00:24:48.832 "state": "enabled", 00:24:48.832 "listen_address": { 00:24:48.832 "trtype": "TCP", 00:24:48.832 "adrfam": "IPv4", 00:24:48.832 "traddr": "10.0.0.2", 00:24:48.832 "trsvcid": "4420" 00:24:48.832 }, 00:24:48.832 "peer_address": { 00:24:48.832 "trtype": "TCP", 00:24:48.832 "adrfam": "IPv4", 00:24:48.832 "traddr": "10.0.0.1", 00:24:48.832 "trsvcid": "46206" 00:24:48.832 }, 00:24:48.832 "auth": { 00:24:48.832 "state": "completed", 00:24:48.832 "digest": "sha256", 00:24:48.832 "dhgroup": "ffdhe6144" 00:24:48.832 } 00:24:48.832 } 00:24:48.832 ]' 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.832 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:49.110 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:49.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.707 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:49.981 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:50.260 00:24:50.260 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:50.260 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:50.260 09:37:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:50.538 { 00:24:50.538 "cntlid": 39, 00:24:50.538 "qid": 0, 00:24:50.538 "state": "enabled", 00:24:50.538 "listen_address": { 00:24:50.538 "trtype": "TCP", 00:24:50.538 "adrfam": "IPv4", 00:24:50.538 "traddr": "10.0.0.2", 00:24:50.538 "trsvcid": "4420" 00:24:50.538 }, 00:24:50.538 "peer_address": { 00:24:50.538 "trtype": "TCP", 00:24:50.538 "adrfam": "IPv4", 00:24:50.538 "traddr": "10.0.0.1", 00:24:50.538 "trsvcid": "46226" 00:24:50.538 }, 00:24:50.538 "auth": { 00:24:50.538 "state": "completed", 00:24:50.538 "digest": "sha256", 00:24:50.538 "dhgroup": "ffdhe6144" 00:24:50.538 } 00:24:50.538 } 00:24:50.538 ]' 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:50.538 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:50.538 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:50.538 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.538 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.820 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:51.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.418 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.700 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:51.701 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:52.304 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:52.304 { 00:24:52.304 "cntlid": 41, 00:24:52.304 "qid": 0, 00:24:52.304 "state": "enabled", 00:24:52.304 "listen_address": { 00:24:52.304 "trtype": "TCP", 00:24:52.304 "adrfam": "IPv4", 00:24:52.304 "traddr": "10.0.0.2", 00:24:52.304 "trsvcid": "4420" 00:24:52.304 }, 00:24:52.304 "peer_address": { 00:24:52.304 "trtype": "TCP", 00:24:52.304 "adrfam": "IPv4", 00:24:52.304 "traddr": "10.0.0.1", 00:24:52.304 "trsvcid": "42632" 00:24:52.304 }, 00:24:52.304 "auth": { 00:24:52.304 "state": 
"completed", 00:24:52.304 "digest": "sha256", 00:24:52.304 "dhgroup": "ffdhe8192" 00:24:52.304 } 00:24:52.304 } 00:24:52.304 ]' 00:24:52.304 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:52.607 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:52.608 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:53.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.575 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.575 09:37:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.575 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:53.575 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:54.185 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:54.185 { 00:24:54.185 "cntlid": 43, 00:24:54.185 "qid": 0, 00:24:54.185 "state": "enabled", 00:24:54.185 "listen_address": { 00:24:54.185 "trtype": "TCP", 00:24:54.185 "adrfam": "IPv4", 00:24:54.185 "traddr": "10.0.0.2", 00:24:54.185 "trsvcid": "4420" 00:24:54.185 }, 00:24:54.185 "peer_address": { 00:24:54.185 "trtype": "TCP", 00:24:54.185 "adrfam": "IPv4", 00:24:54.185 "traddr": "10.0.0.1", 00:24:54.185 "trsvcid": "42660" 00:24:54.185 }, 00:24:54.185 "auth": { 00:24:54.185 "state": "completed", 00:24:54.185 "digest": "sha256", 00:24:54.185 "dhgroup": "ffdhe8192" 00:24:54.185 } 00:24:54.185 } 00:24:54.185 ]' 00:24:54.185 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:54.507 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret 
DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.111 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:55.385 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:55.993 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- 
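
Before inspecting authentication state, each pass confirms that the attach actually produced a controller: the name reported by the host application is compared against the expected nvme0. In shorthand (RPC and jq filter taken directly from the trace; the rpc variable is just an abbreviation for the rpc.py path shown above, and the error handling is illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # List host-side controllers and check that the authenticated one is present.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || { echo "controller nvme0 did not attach" >&2; exit 1; }
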
common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.993 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:55.993 { 00:24:55.993 "cntlid": 45, 00:24:55.993 "qid": 0, 00:24:55.993 "state": "enabled", 00:24:55.993 "listen_address": { 00:24:55.993 "trtype": "TCP", 00:24:55.993 "adrfam": "IPv4", 00:24:55.993 "traddr": "10.0.0.2", 00:24:55.993 "trsvcid": "4420" 00:24:55.993 }, 00:24:55.993 "peer_address": { 00:24:55.994 "trtype": "TCP", 00:24:55.994 "adrfam": "IPv4", 00:24:55.994 "traddr": "10.0.0.1", 00:24:55.994 "trsvcid": "42680" 00:24:55.994 }, 00:24:55.994 "auth": { 00:24:55.994 "state": "completed", 00:24:55.994 "digest": "sha256", 00:24:55.994 "dhgroup": "ffdhe8192" 00:24:55.994 } 00:24:55.994 } 00:24:55.994 ]' 00:24:55.994 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:56.270 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:57.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:24:57.334 
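
The target-side verification that follows each attach reads the subsystem's queue pairs back and checks the negotiated parameters; the jq filters below are the ones used in the trace, and the expected values are those of the sha256/ffdhe8192 passes above (they change as the outer loops advance):

    # Fetch the qpair list from the target and verify the negotiated auth parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)  # rpc_cmd: target-side RPC wrapper used throughout this trace
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
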
09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:57.334 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:57.936 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:57.936 { 00:24:57.936 "cntlid": 47, 00:24:57.936 "qid": 0, 00:24:57.936 "state": "enabled", 00:24:57.936 "listen_address": { 00:24:57.936 "trtype": "TCP", 00:24:57.936 "adrfam": "IPv4", 00:24:57.936 "traddr": "10.0.0.2", 00:24:57.936 "trsvcid": "4420" 00:24:57.936 }, 00:24:57.936 "peer_address": { 00:24:57.936 "trtype": "TCP", 00:24:57.936 "adrfam": "IPv4", 00:24:57.936 "traddr": "10.0.0.1", 00:24:57.936 "trsvcid": "42706" 00:24:57.936 }, 00:24:57.936 "auth": { 00:24:57.936 "state": "completed", 00:24:57.936 "digest": "sha256", 00:24:57.936 "dhgroup": "ffdhe8192" 00:24:57.936 } 00:24:57.936 } 00:24:57.936 ]' 00:24:57.936 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:58.240 
09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:58.240 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:59.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.265 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:59.265 09:37:52 
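
The switch just above from sha256/ffdhe8192 to sha384 with the null DH group is the outer loops advancing: target/auth.sh@84-86 show a digest loop, a DH-group loop and a key loop nested inside each other. An outline of that structure (hostrpc, connect_authenticate and the keys array are defined earlier in target/auth.sh and only referenced here; the array contents below reflect what this excerpt actually exercises, not necessarily the script's full lists):

    digests=(sha256 sha384)                        # sha256 finished above; sha384 starts here
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)  # groups observed in this excerpt
    keys=(key0 key1 key2 key3)                     # the four DH-HMAC-CHAP keys provisioned earlier
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
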
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:59.562 00:24:59.562 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:59.562 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:59.562 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:59.562 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.562 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:59.562 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.563 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.563 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.563 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:59.563 { 00:24:59.563 "cntlid": 49, 00:24:59.563 "qid": 0, 00:24:59.563 "state": "enabled", 00:24:59.563 "listen_address": { 00:24:59.563 "trtype": "TCP", 00:24:59.563 "adrfam": "IPv4", 00:24:59.563 "traddr": "10.0.0.2", 00:24:59.563 "trsvcid": "4420" 00:24:59.563 }, 00:24:59.563 "peer_address": { 00:24:59.563 "trtype": "TCP", 00:24:59.563 "adrfam": "IPv4", 00:24:59.563 "traddr": "10.0.0.1", 00:24:59.563 "trsvcid": "42726" 00:24:59.563 }, 00:24:59.563 "auth": { 00:24:59.563 "state": "completed", 00:24:59.563 "digest": "sha384", 00:24:59.563 "dhgroup": "null" 00:24:59.563 } 00:24:59.563 } 00:24:59.563 ]' 00:24:59.563 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:59.837 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:00.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:00.780 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:01.041 00:25:01.041 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:01.041 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:01.041 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:01.301 { 00:25:01.301 "cntlid": 51, 00:25:01.301 "qid": 
0, 00:25:01.301 "state": "enabled", 00:25:01.301 "listen_address": { 00:25:01.301 "trtype": "TCP", 00:25:01.301 "adrfam": "IPv4", 00:25:01.301 "traddr": "10.0.0.2", 00:25:01.301 "trsvcid": "4420" 00:25:01.301 }, 00:25:01.301 "peer_address": { 00:25:01.301 "trtype": "TCP", 00:25:01.301 "adrfam": "IPv4", 00:25:01.301 "traddr": "10.0.0.1", 00:25:01.301 "trsvcid": "42752" 00:25:01.301 }, 00:25:01.301 "auth": { 00:25:01.301 "state": "completed", 00:25:01.301 "digest": "sha384", 00:25:01.301 "dhgroup": "null" 00:25:01.301 } 00:25:01.301 } 00:25:01.301 ]' 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:01.301 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:01.562 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:02.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:02.505 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:02.767 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:02.767 { 00:25:02.767 "cntlid": 53, 00:25:02.767 "qid": 0, 00:25:02.767 "state": "enabled", 00:25:02.767 "listen_address": { 00:25:02.767 "trtype": "TCP", 00:25:02.767 "adrfam": "IPv4", 00:25:02.767 "traddr": "10.0.0.2", 00:25:02.767 "trsvcid": "4420" 00:25:02.767 }, 00:25:02.767 "peer_address": { 00:25:02.767 "trtype": "TCP", 00:25:02.767 "adrfam": "IPv4", 00:25:02.767 "traddr": "10.0.0.1", 00:25:02.767 "trsvcid": "47632" 00:25:02.767 }, 00:25:02.767 "auth": { 00:25:02.767 "state": "completed", 00:25:02.767 "digest": "sha384", 00:25:02.767 "dhgroup": "null" 00:25:02.767 } 00:25:02.767 } 00:25:02.767 ]' 00:25:02.767 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:03.027 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:03.287 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:03.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:03.859 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:04.120 00:25:04.120 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:04.120 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:04.120 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:04.381 { 00:25:04.381 "cntlid": 55, 00:25:04.381 "qid": 0, 00:25:04.381 "state": "enabled", 00:25:04.381 "listen_address": { 00:25:04.381 "trtype": "TCP", 00:25:04.381 "adrfam": "IPv4", 00:25:04.381 "traddr": "10.0.0.2", 00:25:04.381 "trsvcid": "4420" 00:25:04.381 }, 00:25:04.381 "peer_address": { 00:25:04.381 "trtype": "TCP", 00:25:04.381 "adrfam": "IPv4", 00:25:04.381 "traddr": "10.0.0.1", 00:25:04.381 "trsvcid": "47666" 00:25:04.381 }, 00:25:04.381 "auth": { 00:25:04.381 "state": "completed", 00:25:04.381 "digest": "sha384", 00:25:04.381 "dhgroup": "null" 00:25:04.381 } 00:25:04.381 } 00:25:04.381 ]' 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:04.381 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:04.642 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:04.642 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:04.642 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:04.642 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:05.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:05.585 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:05.585 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:05.847 00:25:05.847 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:05.847 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:05.847 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:06.108 { 00:25:06.108 "cntlid": 57, 00:25:06.108 "qid": 0, 00:25:06.108 "state": "enabled", 00:25:06.108 "listen_address": { 00:25:06.108 "trtype": "TCP", 00:25:06.108 "adrfam": "IPv4", 00:25:06.108 "traddr": "10.0.0.2", 00:25:06.108 "trsvcid": "4420" 00:25:06.108 }, 00:25:06.108 "peer_address": { 00:25:06.108 "trtype": "TCP", 00:25:06.108 "adrfam": "IPv4", 00:25:06.108 "traddr": "10.0.0.1", 00:25:06.108 "trsvcid": "47698" 00:25:06.108 }, 00:25:06.108 "auth": { 00:25:06.108 "state": "completed", 00:25:06.108 "digest": "sha384", 00:25:06.108 "dhgroup": "ffdhe2048" 00:25:06.108 } 00:25:06.108 } 
00:25:06.108 ]' 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:06.108 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:06.369 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:06.940 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:07.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- 
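
Each pass also proves the credential from the Linux host side: after detaching the SPDK initiator, the script connects with nvme-cli using the same secret in the DHHC-1 interchange format (DHHC-1:NN:<base64>:, where the second field records how the secret was transformed: 00 unhashed, 01/02/03 hashed with SHA-256/384/512). A condensed form of that round trip, using the key0 secret and addresses from this run (the -i 1 flag is copied from the trace as-is):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204
    secret='DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==:'
    # Connect through the kernel initiator with the DH-HMAC-CHAP secret, then tear the session down.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"   # on success the trace shows: "... disconnected 1 controller(s)"
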
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:07.201 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:07.463 00:25:07.463 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:07.463 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:07.463 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:07.725 { 00:25:07.725 "cntlid": 59, 00:25:07.725 "qid": 0, 00:25:07.725 "state": "enabled", 00:25:07.725 "listen_address": { 00:25:07.725 "trtype": "TCP", 00:25:07.725 "adrfam": "IPv4", 00:25:07.725 "traddr": "10.0.0.2", 00:25:07.725 "trsvcid": "4420" 00:25:07.725 }, 00:25:07.725 "peer_address": { 00:25:07.725 "trtype": "TCP", 00:25:07.725 "adrfam": "IPv4", 00:25:07.725 "traddr": "10.0.0.1", 00:25:07.725 "trsvcid": "47714" 00:25:07.725 }, 00:25:07.725 "auth": { 00:25:07.725 "state": "completed", 00:25:07.725 "digest": "sha384", 00:25:07.725 "dhgroup": "ffdhe2048" 00:25:07.725 } 00:25:07.725 } 00:25:07.725 ]' 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.725 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:07.985 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:08.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:08.557 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:08.818 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:09.078 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.078 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:09.339 { 00:25:09.339 "cntlid": 61, 00:25:09.339 "qid": 0, 00:25:09.339 "state": "enabled", 00:25:09.339 "listen_address": { 00:25:09.339 "trtype": "TCP", 00:25:09.339 "adrfam": "IPv4", 00:25:09.339 "traddr": "10.0.0.2", 00:25:09.339 "trsvcid": "4420" 00:25:09.339 }, 00:25:09.339 "peer_address": { 00:25:09.339 "trtype": "TCP", 00:25:09.339 "adrfam": "IPv4", 00:25:09.339 "traddr": "10.0.0.1", 00:25:09.339 "trsvcid": "47750" 00:25:09.339 }, 00:25:09.339 "auth": { 00:25:09.339 "state": "completed", 00:25:09.339 "digest": "sha384", 00:25:09.339 "dhgroup": "ffdhe2048" 00:25:09.339 } 00:25:09.339 } 00:25:09.339 ]' 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:09.339 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:09.600 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:10.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:10.170 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.431 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:10.691 00:25:10.691 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:10.691 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:10.692 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:10.953 { 00:25:10.953 "cntlid": 63, 00:25:10.953 "qid": 0, 00:25:10.953 "state": "enabled", 00:25:10.953 "listen_address": { 00:25:10.953 "trtype": "TCP", 00:25:10.953 "adrfam": "IPv4", 00:25:10.953 "traddr": "10.0.0.2", 00:25:10.953 "trsvcid": "4420" 00:25:10.953 }, 00:25:10.953 "peer_address": { 00:25:10.953 "trtype": "TCP", 00:25:10.953 "adrfam": "IPv4", 00:25:10.953 "traddr": "10.0.0.1", 00:25:10.953 "trsvcid": "47778" 00:25:10.953 }, 00:25:10.953 "auth": { 00:25:10.953 "state": "completed", 00:25:10.953 "digest": "sha384", 00:25:10.953 "dhgroup": "ffdhe2048" 00:25:10.953 } 00:25:10.953 } 00:25:10.953 ]' 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- 
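
Every pass ends the same way, and the key3 pass in progress here is about to do it too: the SPDK-side controller is detached before the nvme-cli check, then the kernel connection is dropped and the host entry removed so the next digest/dhgroup/key combination starts clean. Roughly (hostrpc and rpc_cmd are the same wrappers used throughout this trace):

    # Per-pass cleanup, mirroring auth.sh@48, @53 and @54 above.
    hostrpc bdev_nvme_detach_controller nvme0              # drop the SPDK initiator's controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0          # drop the kernel (nvme-cli) connection
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204   # de-authorize the host
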
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:10.953 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:11.213 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:11.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:11.785 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:12.045 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:25:12.045 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:12.045 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:12.045 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:12.045 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:12.045 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:12.046 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.046 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.046 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.046 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:12.046 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:12.307 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.307 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:12.569 { 00:25:12.569 "cntlid": 65, 00:25:12.569 "qid": 0, 00:25:12.569 "state": "enabled", 00:25:12.569 "listen_address": { 00:25:12.569 "trtype": "TCP", 00:25:12.569 "adrfam": "IPv4", 00:25:12.569 "traddr": "10.0.0.2", 00:25:12.569 "trsvcid": "4420" 00:25:12.569 }, 00:25:12.569 "peer_address": { 00:25:12.569 "trtype": "TCP", 00:25:12.569 "adrfam": "IPv4", 00:25:12.569 "traddr": "10.0.0.1", 00:25:12.569 "trsvcid": "54588" 00:25:12.569 }, 00:25:12.569 "auth": { 00:25:12.569 "state": "completed", 00:25:12.569 "digest": "sha384", 00:25:12.569 "dhgroup": "ffdhe3072" 00:25:12.569 } 00:25:12.569 } 00:25:12.569 ]' 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:12.569 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:12.829 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:13.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.398 
09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:13.398 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:13.658 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:25:13.658 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:13.658 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:13.658 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:13.658 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:13.660 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:13.660 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.660 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.660 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.660 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:13.660 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:13.920 00:25:13.920 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:13.920 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:13.920 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:14.180 { 00:25:14.180 "cntlid": 67, 00:25:14.180 "qid": 0, 00:25:14.180 "state": "enabled", 00:25:14.180 "listen_address": { 00:25:14.180 "trtype": "TCP", 00:25:14.180 "adrfam": "IPv4", 00:25:14.180 "traddr": "10.0.0.2", 00:25:14.180 "trsvcid": 
"4420" 00:25:14.180 }, 00:25:14.180 "peer_address": { 00:25:14.180 "trtype": "TCP", 00:25:14.180 "adrfam": "IPv4", 00:25:14.180 "traddr": "10.0.0.1", 00:25:14.180 "trsvcid": "54626" 00:25:14.180 }, 00:25:14.180 "auth": { 00:25:14.180 "state": "completed", 00:25:14.180 "digest": "sha384", 00:25:14.180 "dhgroup": "ffdhe3072" 00:25:14.180 } 00:25:14.180 } 00:25:14.180 ]' 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:14.180 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:14.442 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:15.016 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:15.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:15.323 09:38:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.323 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:15.324 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:15.585 00:25:15.585 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:15.585 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:15.585 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:15.846 { 00:25:15.846 "cntlid": 69, 00:25:15.846 "qid": 0, 00:25:15.846 "state": "enabled", 00:25:15.846 "listen_address": { 00:25:15.846 "trtype": "TCP", 00:25:15.846 "adrfam": "IPv4", 00:25:15.846 "traddr": "10.0.0.2", 00:25:15.846 "trsvcid": "4420" 00:25:15.846 }, 00:25:15.846 "peer_address": { 00:25:15.846 "trtype": "TCP", 00:25:15.846 "adrfam": "IPv4", 00:25:15.846 "traddr": "10.0.0.1", 00:25:15.846 "trsvcid": "54654" 00:25:15.846 }, 00:25:15.846 "auth": { 00:25:15.846 "state": "completed", 00:25:15.846 "digest": "sha384", 00:25:15.846 "dhgroup": "ffdhe3072" 00:25:15.846 } 00:25:15.846 } 00:25:15.846 ]' 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:15.846 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:16.106 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:16.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:16.675 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:16.935 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:17.195 00:25:17.195 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:17.195 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:17.195 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:17.455 { 00:25:17.455 "cntlid": 71, 00:25:17.455 "qid": 0, 00:25:17.455 "state": "enabled", 00:25:17.455 "listen_address": { 00:25:17.455 "trtype": "TCP", 00:25:17.455 "adrfam": "IPv4", 00:25:17.455 "traddr": "10.0.0.2", 00:25:17.455 "trsvcid": "4420" 00:25:17.455 }, 00:25:17.455 "peer_address": { 00:25:17.455 "trtype": "TCP", 00:25:17.455 "adrfam": "IPv4", 00:25:17.455 "traddr": "10.0.0.1", 00:25:17.455 "trsvcid": "54686" 00:25:17.455 }, 00:25:17.455 "auth": { 00:25:17.455 "state": "completed", 00:25:17.455 "digest": "sha384", 00:25:17.455 "dhgroup": "ffdhe3072" 00:25:17.455 } 00:25:17.455 } 00:25:17.455 ]' 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:17.455 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:17.715 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:18.653 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:18.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:18.654 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:18.654 09:38:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:18.654 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:18.913 00:25:18.913 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:18.913 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:18.913 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:18.913 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.913 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:18.913 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:19.174 { 00:25:19.174 "cntlid": 73, 00:25:19.174 "qid": 0, 00:25:19.174 "state": "enabled", 00:25:19.174 "listen_address": { 00:25:19.174 "trtype": "TCP", 00:25:19.174 "adrfam": "IPv4", 00:25:19.174 "traddr": "10.0.0.2", 00:25:19.174 "trsvcid": "4420" 00:25:19.174 }, 00:25:19.174 "peer_address": { 00:25:19.174 "trtype": "TCP", 00:25:19.174 "adrfam": "IPv4", 00:25:19.174 "traddr": "10.0.0.1", 00:25:19.174 "trsvcid": "54730" 00:25:19.174 }, 00:25:19.174 "auth": { 00:25:19.174 "state": "completed", 00:25:19.174 "digest": "sha384", 00:25:19.174 "dhgroup": "ffdhe4096" 00:25:19.174 } 00:25:19.174 } 00:25:19.174 ]' 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:19.174 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:19.434 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:20.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:20.005 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:20.265 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:20.526 00:25:20.526 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:20.526 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:20.526 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:20.786 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.786 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:20.786 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.786 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.786 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.786 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:20.786 { 00:25:20.786 "cntlid": 75, 00:25:20.786 "qid": 0, 00:25:20.786 "state": "enabled", 00:25:20.786 "listen_address": { 00:25:20.786 "trtype": "TCP", 00:25:20.786 "adrfam": "IPv4", 00:25:20.786 "traddr": "10.0.0.2", 00:25:20.786 "trsvcid": "4420" 00:25:20.787 }, 00:25:20.787 "peer_address": { 00:25:20.787 "trtype": "TCP", 00:25:20.787 "adrfam": "IPv4", 00:25:20.787 "traddr": "10.0.0.1", 00:25:20.787 "trsvcid": "54744" 00:25:20.787 }, 00:25:20.787 "auth": { 00:25:20.787 "state": "completed", 00:25:20.787 "digest": "sha384", 00:25:20.787 "dhgroup": "ffdhe4096" 00:25:20.787 } 00:25:20.787 } 00:25:20.787 ]' 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:20.787 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:21.047 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:21.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:21.988 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:22.249 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.249 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
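The trace entries before and after this point repeat the same authentication round for every digest/dhgroup/key combination. As a plain-bash sketch of one such round, reconstructed only from the commands actually traced in this log: the rpc.py path, socket, addresses, NQNs and host UUID are the values used by this run; DHCHAP_SECRET is a placeholder standing in for the DHHC-1:... key material shown in the nvme connect lines; and routing the rpc_cmd calls through rpc.py's default target socket (no -s) is an assumption about the autotest helpers, since their internals are hidden by xtrace_disable.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
  digest=sha384 dhgroup=ffdhe4096 key=key2      # values supplied by the outer loops

  # host side: restrict the initiator to one digest/dhgroup combination
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # target side: allow this host to authenticate with the selected key (assumed default socket)
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

  # connect through the SPDK host stack and check the negotiated auth parameters
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
  "$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expect completed
  "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0

  # repeat the handshake with the kernel initiator, then drop the host entry again
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret "$DHCHAP_SECRET"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
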
00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:22.509 { 00:25:22.509 "cntlid": 77, 00:25:22.509 "qid": 0, 00:25:22.509 "state": "enabled", 00:25:22.509 "listen_address": { 00:25:22.509 "trtype": "TCP", 00:25:22.509 "adrfam": "IPv4", 00:25:22.509 "traddr": "10.0.0.2", 00:25:22.509 "trsvcid": "4420" 00:25:22.509 }, 00:25:22.509 "peer_address": { 00:25:22.509 "trtype": "TCP", 00:25:22.509 "adrfam": "IPv4", 00:25:22.509 "traddr": "10.0.0.1", 00:25:22.509 "trsvcid": "57408" 00:25:22.509 }, 00:25:22.509 "auth": { 00:25:22.509 "state": "completed", 00:25:22.509 "digest": "sha384", 00:25:22.509 "dhgroup": "ffdhe4096" 00:25:22.509 } 00:25:22.509 } 00:25:22.509 ]' 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:22.509 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:22.769 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:23.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.342 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:23.603 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:23.862 00:25:23.862 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:23.862 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:23.862 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:24.120 { 00:25:24.120 "cntlid": 79, 00:25:24.120 "qid": 0, 00:25:24.120 "state": "enabled", 00:25:24.120 "listen_address": { 00:25:24.120 "trtype": "TCP", 00:25:24.120 "adrfam": "IPv4", 00:25:24.120 "traddr": "10.0.0.2", 00:25:24.120 "trsvcid": "4420" 00:25:24.120 }, 00:25:24.120 "peer_address": { 00:25:24.120 "trtype": "TCP", 00:25:24.120 "adrfam": "IPv4", 00:25:24.120 "traddr": "10.0.0.1", 00:25:24.120 "trsvcid": "57428" 00:25:24.120 }, 00:25:24.120 "auth": { 00:25:24.120 "state": "completed", 00:25:24.120 "digest": "sha384", 00:25:24.120 "dhgroup": "ffdhe4096" 00:25:24.120 } 00:25:24.120 } 00:25:24.120 ]' 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:24.120 09:38:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:24.120 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:24.380 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:24.951 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:25.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:25.212 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:25.472 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:25.732 { 00:25:25.732 "cntlid": 81, 00:25:25.732 "qid": 0, 00:25:25.732 "state": "enabled", 00:25:25.732 "listen_address": { 00:25:25.732 "trtype": "TCP", 00:25:25.732 "adrfam": "IPv4", 00:25:25.732 "traddr": "10.0.0.2", 00:25:25.732 "trsvcid": "4420" 00:25:25.732 }, 00:25:25.732 "peer_address": { 00:25:25.732 "trtype": "TCP", 00:25:25.732 "adrfam": "IPv4", 00:25:25.732 "traddr": "10.0.0.1", 00:25:25.732 "trsvcid": "57452" 00:25:25.732 }, 00:25:25.732 "auth": { 00:25:25.732 "state": "completed", 00:25:25.732 "digest": "sha384", 00:25:25.732 "dhgroup": "ffdhe6144" 00:25:25.732 } 00:25:25.732 } 00:25:25.732 ]' 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:25.732 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:25.992 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:25.992 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:25.992 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:25.992 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:25.992 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:25.992 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:26.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:26.940 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:27.511 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:27.511 { 00:25:27.511 "cntlid": 83, 00:25:27.511 "qid": 0, 00:25:27.511 "state": "enabled", 00:25:27.511 "listen_address": { 00:25:27.511 "trtype": "TCP", 00:25:27.511 "adrfam": "IPv4", 00:25:27.511 "traddr": "10.0.0.2", 00:25:27.511 "trsvcid": "4420" 00:25:27.511 }, 00:25:27.511 "peer_address": { 00:25:27.511 
"trtype": "TCP", 00:25:27.511 "adrfam": "IPv4", 00:25:27.511 "traddr": "10.0.0.1", 00:25:27.511 "trsvcid": "57486" 00:25:27.511 }, 00:25:27.511 "auth": { 00:25:27.511 "state": "completed", 00:25:27.511 "digest": "sha384", 00:25:27.511 "dhgroup": "ffdhe6144" 00:25:27.511 } 00:25:27.511 } 00:25:27.511 ]' 00:25:27.511 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:27.511 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:27.511 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:27.771 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:27.771 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:27.771 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:27.771 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:27.771 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:27.771 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:28.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:28.714 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:29.284 00:25:29.284 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:29.285 { 00:25:29.285 "cntlid": 85, 00:25:29.285 "qid": 0, 00:25:29.285 "state": "enabled", 00:25:29.285 "listen_address": { 00:25:29.285 "trtype": "TCP", 00:25:29.285 "adrfam": "IPv4", 00:25:29.285 "traddr": "10.0.0.2", 00:25:29.285 "trsvcid": "4420" 00:25:29.285 }, 00:25:29.285 "peer_address": { 00:25:29.285 "trtype": "TCP", 00:25:29.285 "adrfam": "IPv4", 00:25:29.285 "traddr": "10.0.0.1", 00:25:29.285 "trsvcid": "57508" 00:25:29.285 }, 00:25:29.285 "auth": { 00:25:29.285 "state": "completed", 00:25:29.285 "digest": "sha384", 00:25:29.285 "dhgroup": "ffdhe6144" 00:25:29.285 } 00:25:29.285 } 00:25:29.285 ]' 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:29.285 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:29.545 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:29.545 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:29.545 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:29.545 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:30.484 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:30.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:30.485 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:30.746 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.007 09:38:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:31.007 { 00:25:31.007 "cntlid": 87, 00:25:31.007 "qid": 0, 00:25:31.007 "state": "enabled", 00:25:31.007 "listen_address": { 00:25:31.007 "trtype": "TCP", 00:25:31.007 "adrfam": "IPv4", 00:25:31.007 "traddr": "10.0.0.2", 00:25:31.007 "trsvcid": "4420" 00:25:31.007 }, 00:25:31.007 "peer_address": { 00:25:31.007 "trtype": "TCP", 00:25:31.007 "adrfam": "IPv4", 00:25:31.007 "traddr": "10.0.0.1", 00:25:31.007 "trsvcid": "57540" 00:25:31.007 }, 00:25:31.007 "auth": { 00:25:31.007 "state": "completed", 00:25:31.007 "digest": "sha384", 00:25:31.007 "dhgroup": "ffdhe6144" 00:25:31.007 } 00:25:31.007 } 00:25:31.007 ]' 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:31.007 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:31.267 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:31.267 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:31.267 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:31.267 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:31.267 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:31.267 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:32.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.206 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:32.207 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:32.779 00:25:32.779 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:32.779 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:32.779 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:33.039 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.039 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:33.039 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.039 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.039 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.039 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:33.039 { 00:25:33.039 "cntlid": 89, 00:25:33.039 "qid": 0, 00:25:33.039 "state": "enabled", 00:25:33.039 "listen_address": { 00:25:33.040 "trtype": "TCP", 00:25:33.040 "adrfam": "IPv4", 00:25:33.040 "traddr": "10.0.0.2", 00:25:33.040 "trsvcid": "4420" 00:25:33.040 }, 00:25:33.040 "peer_address": { 00:25:33.040 "trtype": "TCP", 00:25:33.040 "adrfam": "IPv4", 00:25:33.040 "traddr": "10.0.0.1", 00:25:33.040 "trsvcid": "39400" 00:25:33.040 }, 00:25:33.040 "auth": { 00:25:33.040 "state": "completed", 00:25:33.040 "digest": "sha384", 00:25:33.040 "dhgroup": "ffdhe8192" 00:25:33.040 } 00:25:33.040 } 00:25:33.040 ]' 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:33.040 09:38:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:33.040 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:33.300 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:33.871 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:33.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:33.871 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:34.132 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:34.703 00:25:34.703 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:34.703 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:34.703 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:34.963 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.963 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:34.963 09:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.963 09:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:34.963 09:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.963 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:34.963 { 00:25:34.963 "cntlid": 91, 00:25:34.963 "qid": 0, 00:25:34.963 "state": "enabled", 00:25:34.963 "listen_address": { 00:25:34.963 "trtype": "TCP", 00:25:34.963 "adrfam": "IPv4", 00:25:34.963 "traddr": "10.0.0.2", 00:25:34.963 "trsvcid": "4420" 00:25:34.963 }, 00:25:34.963 "peer_address": { 00:25:34.963 "trtype": "TCP", 00:25:34.963 "adrfam": "IPv4", 00:25:34.963 "traddr": "10.0.0.1", 00:25:34.963 "trsvcid": "39430" 00:25:34.963 }, 00:25:34.963 "auth": { 00:25:34.963 "state": "completed", 00:25:34.963 "digest": "sha384", 00:25:34.964 "dhgroup": "ffdhe8192" 00:25:34.964 } 00:25:34.964 } 00:25:34.964 ]' 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:34.964 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:35.225 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:35.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.795 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:36.056 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:36.630 00:25:36.630 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:36.630 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:36.630 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
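The trace above and below repeats the same per-key cycle for every digest/dhgroup combination. As a reading aid, here is a minimal bash sketch reconstructed from the trace line references (target/auth.sh@34-@54 for connect_authenticate, @84-@89 for the surrounding loops); it is not the actual auth.sh source. The NQNs, host id, RPC socket and RPC/nvme-cli invocations are taken verbatim from the log, while the digests/dhgroups/keys arrays and the rpc_cmd/hostrpc wrappers are assumptions about how the script ties them together, and the DHHC-1 secrets are elided.

#!/usr/bin/env bash
# Hypothetical reconstruction of the nvmf_auth_target loop seen in this trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204

rpc_cmd() { "$RPC" "$@"; }                        # target-side RPC (assumed wrapper)
hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme RPC, as shown in the log

digests=(sha384 sha512)                           # the two digests exercised in this excerpt
dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)     # dhgroups that appear in this excerpt; order inferred
keys=("DHHC-1:00:..." "DHHC-1:01:..." "DHHC-1:02:..." "DHHC-1:03:...")  # secrets elided; full strings appear in the log

connect_authenticate() {
    local digest=$1 dhgroup=$2 key=key$3 qpairs
    # Allow this host to authenticate to the subsystem with the selected key
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$key"
    # SPDK-host path: attach a bdev_nvme controller with DH-HMAC-CHAP
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$key"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Verify the negotiated parameters on the target's queue pair
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
    # Kernel path: nvme-cli connect using the raw DHHC-1 secret, then tear down
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" --dhchap-secret "${keys[$3]}"
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
}

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host-side bdev_nvme layer to one digest/dhgroup pair per pass
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

Each iteration is treated as passed when bdev_nvme_get_controllers reports nvme0 and the qpair listing (the JSON dumps repeated throughout this log) shows auth.state "completed" with the digest and dhgroup that were just configured, followed by a clean nvme-cli connect/disconnect using the matching DHHC-1 secret.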
00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:36.892 { 00:25:36.892 "cntlid": 93, 00:25:36.892 "qid": 0, 00:25:36.892 "state": "enabled", 00:25:36.892 "listen_address": { 00:25:36.892 "trtype": "TCP", 00:25:36.892 "adrfam": "IPv4", 00:25:36.892 "traddr": "10.0.0.2", 00:25:36.892 "trsvcid": "4420" 00:25:36.892 }, 00:25:36.892 "peer_address": { 00:25:36.892 "trtype": "TCP", 00:25:36.892 "adrfam": "IPv4", 00:25:36.892 "traddr": "10.0.0.1", 00:25:36.892 "trsvcid": "39464" 00:25:36.892 }, 00:25:36.892 "auth": { 00:25:36.892 "state": "completed", 00:25:36.892 "digest": "sha384", 00:25:36.892 "dhgroup": "ffdhe8192" 00:25:36.892 } 00:25:36.892 } 00:25:36.892 ]' 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:36.892 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:36.893 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:36.893 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:36.893 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:37.154 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:37.726 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:37.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:37.726 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:37.726 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.726 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:37.987 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:38.558 00:25:38.558 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:38.558 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:38.558 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:38.818 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.818 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:38.818 09:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.818 09:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.818 09:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.818 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:38.818 { 00:25:38.818 "cntlid": 95, 00:25:38.818 "qid": 0, 00:25:38.818 "state": "enabled", 00:25:38.818 "listen_address": { 00:25:38.818 "trtype": "TCP", 00:25:38.818 "adrfam": "IPv4", 00:25:38.818 "traddr": "10.0.0.2", 00:25:38.818 "trsvcid": "4420" 00:25:38.818 }, 00:25:38.818 "peer_address": { 00:25:38.818 "trtype": "TCP", 00:25:38.819 "adrfam": "IPv4", 00:25:38.819 "traddr": "10.0.0.1", 00:25:38.819 "trsvcid": "39484" 00:25:38.819 }, 00:25:38.819 "auth": { 00:25:38.819 "state": "completed", 00:25:38.819 "digest": "sha384", 00:25:38.819 "dhgroup": "ffdhe8192" 00:25:38.819 } 00:25:38.819 } 00:25:38.819 ]' 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:38.819 09:38:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:38.819 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:39.079 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:39.649 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:39.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:39.909 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:40.170 00:25:40.170 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:40.170 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:40.170 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:40.430 { 00:25:40.430 "cntlid": 97, 00:25:40.430 "qid": 0, 00:25:40.430 "state": "enabled", 00:25:40.430 "listen_address": { 00:25:40.430 "trtype": "TCP", 00:25:40.430 "adrfam": "IPv4", 00:25:40.430 "traddr": "10.0.0.2", 00:25:40.430 "trsvcid": "4420" 00:25:40.430 }, 00:25:40.430 "peer_address": { 00:25:40.430 "trtype": "TCP", 00:25:40.430 "adrfam": "IPv4", 00:25:40.430 "traddr": "10.0.0.1", 00:25:40.430 "trsvcid": "39522" 00:25:40.430 }, 00:25:40.430 "auth": { 00:25:40.430 "state": "completed", 00:25:40.430 "digest": "sha512", 00:25:40.430 "dhgroup": "null" 00:25:40.430 } 00:25:40.430 } 00:25:40.430 ]' 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:40.430 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:40.691 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:41.261 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:41.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.522 09:38:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:41.522 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:41.782 00:25:41.782 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:41.782 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:41.782 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:42.043 { 00:25:42.043 "cntlid": 99, 00:25:42.043 "qid": 0, 00:25:42.043 "state": "enabled", 00:25:42.043 "listen_address": { 00:25:42.043 "trtype": "TCP", 00:25:42.043 "adrfam": "IPv4", 00:25:42.043 "traddr": "10.0.0.2", 00:25:42.043 "trsvcid": "4420" 00:25:42.043 }, 
00:25:42.043 "peer_address": { 00:25:42.043 "trtype": "TCP", 00:25:42.043 "adrfam": "IPv4", 00:25:42.043 "traddr": "10.0.0.1", 00:25:42.043 "trsvcid": "32938" 00:25:42.043 }, 00:25:42.043 "auth": { 00:25:42.043 "state": "completed", 00:25:42.043 "digest": "sha512", 00:25:42.043 "dhgroup": "null" 00:25:42.043 } 00:25:42.043 } 00:25:42.043 ]' 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:42.043 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:42.044 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:42.044 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:42.044 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:42.044 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:42.044 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:42.044 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:42.304 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:42.874 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:43.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:43.134 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:43.395 00:25:43.395 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:43.395 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:43.395 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:43.656 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.656 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:43.656 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.656 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:43.656 { 00:25:43.656 "cntlid": 101, 00:25:43.656 "qid": 0, 00:25:43.656 "state": "enabled", 00:25:43.656 "listen_address": { 00:25:43.656 "trtype": "TCP", 00:25:43.656 "adrfam": "IPv4", 00:25:43.656 "traddr": "10.0.0.2", 00:25:43.656 "trsvcid": "4420" 00:25:43.656 }, 00:25:43.656 "peer_address": { 00:25:43.656 "trtype": "TCP", 00:25:43.656 "adrfam": "IPv4", 00:25:43.656 "traddr": "10.0.0.1", 00:25:43.656 "trsvcid": "32970" 00:25:43.656 }, 00:25:43.656 "auth": { 00:25:43.656 "state": "completed", 00:25:43.656 "digest": "sha512", 00:25:43.656 "dhgroup": "null" 00:25:43.656 } 00:25:43.656 } 00:25:43.656 ]' 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:43.656 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:43.657 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:43.917 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:44.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:44.487 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:44.488 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:44.748 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:45.008 00:25:45.008 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:45.008 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:45.008 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:45.268 { 00:25:45.268 "cntlid": 103, 00:25:45.268 "qid": 0, 00:25:45.268 "state": "enabled", 00:25:45.268 "listen_address": { 00:25:45.268 "trtype": "TCP", 00:25:45.268 "adrfam": "IPv4", 00:25:45.268 "traddr": "10.0.0.2", 00:25:45.268 "trsvcid": "4420" 00:25:45.268 }, 00:25:45.268 "peer_address": { 00:25:45.268 "trtype": "TCP", 00:25:45.268 "adrfam": "IPv4", 00:25:45.268 "traddr": "10.0.0.1", 00:25:45.268 "trsvcid": "32992" 00:25:45.268 }, 00:25:45.268 "auth": { 00:25:45.268 "state": "completed", 00:25:45.268 "digest": "sha512", 00:25:45.268 "dhgroup": "null" 00:25:45.268 } 00:25:45.268 } 00:25:45.268 ]' 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:45.268 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:45.269 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:45.529 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:46.104 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:46.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:46.104 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:46.104 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.104 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.365 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:46.366 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:46.626 00:25:46.626 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:46.626 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:46.626 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:46.887 { 00:25:46.887 "cntlid": 105, 00:25:46.887 "qid": 0, 00:25:46.887 "state": "enabled", 00:25:46.887 "listen_address": { 00:25:46.887 "trtype": "TCP", 00:25:46.887 "adrfam": "IPv4", 00:25:46.887 "traddr": "10.0.0.2", 00:25:46.887 "trsvcid": "4420" 00:25:46.887 }, 00:25:46.887 "peer_address": { 00:25:46.887 "trtype": "TCP", 00:25:46.887 "adrfam": "IPv4", 00:25:46.887 "traddr": "10.0.0.1", 00:25:46.887 "trsvcid": "33018" 00:25:46.887 }, 00:25:46.887 "auth": { 00:25:46.887 "state": "completed", 00:25:46.887 "digest": "sha512", 00:25:46.887 "dhgroup": "ffdhe2048" 00:25:46.887 } 00:25:46.887 } 00:25:46.887 ]' 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:46.887 09:38:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:46.887 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:47.148 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:48.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:48.091 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:48.352 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:48.352 { 00:25:48.352 "cntlid": 107, 00:25:48.352 "qid": 0, 00:25:48.352 "state": "enabled", 00:25:48.352 "listen_address": { 00:25:48.352 "trtype": "TCP", 00:25:48.352 "adrfam": "IPv4", 00:25:48.352 "traddr": "10.0.0.2", 00:25:48.352 "trsvcid": "4420" 00:25:48.352 }, 00:25:48.352 "peer_address": { 00:25:48.352 "trtype": "TCP", 00:25:48.352 "adrfam": "IPv4", 00:25:48.352 "traddr": "10.0.0.1", 00:25:48.352 "trsvcid": "33036" 00:25:48.352 }, 00:25:48.352 "auth": { 00:25:48.352 "state": "completed", 00:25:48.352 "digest": "sha512", 00:25:48.352 "dhgroup": "ffdhe2048" 00:25:48.352 } 00:25:48.352 } 00:25:48.352 ]' 00:25:48.352 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:48.612 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:48.612 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:48.612 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:48.612 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:48.612 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:48.612 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:48.612 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:48.612 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:49.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.557 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:49.557 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:49.818 00:25:49.818 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:49.818 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:49.818 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
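The qpairs dump that follows is what the script inspects after each attach. A minimal sketch of that verification step, reconstructed only from the RPC and jq calls visible in this trace (hostrpc and rpc_cmd are the wrappers the trace expands; their definitions below are assumptions matching those expansions, not the actual target/auth.sh source):

# Sketch of the per-attach verification repeated in this trace (assumptions noted above).
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
subnqn=nqn.2024-03.io.spdk:cnode0

# the freshly attached host-side controller must show up under its bdev name
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# the target-side qpair must report the negotiated DH-HMAC-CHAP parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")           # rpc_cmd: target-side RPC wrapper seen in the trace
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # ffdhe2048 is the group under test at this point
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# tear the controller down again before the kernel-initiator connect that follows
hostrpc bdev_nvme_detach_controller nvme0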
00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:50.079 { 00:25:50.079 "cntlid": 109, 00:25:50.079 "qid": 0, 00:25:50.079 "state": "enabled", 00:25:50.079 "listen_address": { 00:25:50.079 "trtype": "TCP", 00:25:50.079 "adrfam": "IPv4", 00:25:50.079 "traddr": "10.0.0.2", 00:25:50.079 "trsvcid": "4420" 00:25:50.079 }, 00:25:50.079 "peer_address": { 00:25:50.079 "trtype": "TCP", 00:25:50.079 "adrfam": "IPv4", 00:25:50.079 "traddr": "10.0.0.1", 00:25:50.079 "trsvcid": "33068" 00:25:50.079 }, 00:25:50.079 "auth": { 00:25:50.079 "state": "completed", 00:25:50.079 "digest": "sha512", 00:25:50.079 "dhgroup": "ffdhe2048" 00:25:50.079 } 00:25:50.079 } 00:25:50.079 ]' 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:50.079 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:50.340 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:51.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:51.284 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:51.544 00:25:51.544 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:51.544 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:51.544 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:51.544 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.544 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:51.544 09:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.544 09:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:51.803 { 00:25:51.803 "cntlid": 111, 00:25:51.803 "qid": 0, 00:25:51.803 "state": "enabled", 00:25:51.803 "listen_address": { 00:25:51.803 "trtype": "TCP", 00:25:51.803 "adrfam": "IPv4", 00:25:51.803 "traddr": "10.0.0.2", 00:25:51.803 "trsvcid": "4420" 00:25:51.803 }, 00:25:51.803 "peer_address": { 00:25:51.803 "trtype": "TCP", 00:25:51.803 "adrfam": "IPv4", 00:25:51.803 "traddr": "10.0.0.1", 00:25:51.803 "trsvcid": "33090" 00:25:51.803 }, 00:25:51.803 "auth": { 00:25:51.803 "state": "completed", 00:25:51.803 "digest": "sha512", 00:25:51.803 "dhgroup": "ffdhe2048" 00:25:51.803 } 00:25:51.803 } 00:25:51.803 ]' 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:51.803 09:38:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:51.803 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:52.205 09:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:52.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.774 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.035 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.035 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:53.035 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:53.035 00:25:53.035 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:53.035 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:53.035 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:53.295 { 00:25:53.295 "cntlid": 113, 00:25:53.295 "qid": 0, 00:25:53.295 "state": "enabled", 00:25:53.295 "listen_address": { 00:25:53.295 "trtype": "TCP", 00:25:53.295 "adrfam": "IPv4", 00:25:53.295 "traddr": "10.0.0.2", 00:25:53.295 "trsvcid": "4420" 00:25:53.295 }, 00:25:53.295 "peer_address": { 00:25:53.295 "trtype": "TCP", 00:25:53.295 "adrfam": "IPv4", 00:25:53.295 "traddr": "10.0.0.1", 00:25:53.295 "trsvcid": "55670" 00:25:53.295 }, 00:25:53.295 "auth": { 00:25:53.295 "state": "completed", 00:25:53.295 "digest": "sha512", 00:25:53.295 "dhgroup": "ffdhe3072" 00:25:53.295 } 00:25:53.295 } 00:25:53.295 ]' 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:53.295 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:53.555 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:53.555 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:53.555 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:53.555 09:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:53.555 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:54.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:54.496 09:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:54.757 00:25:54.757 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:54.757 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:54.757 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:55.016 { 00:25:55.016 "cntlid": 115, 00:25:55.016 "qid": 0, 00:25:55.016 "state": "enabled", 00:25:55.016 "listen_address": { 00:25:55.016 "trtype": "TCP", 00:25:55.016 "adrfam": "IPv4", 00:25:55.016 "traddr": "10.0.0.2", 00:25:55.016 "trsvcid": "4420" 00:25:55.016 }, 00:25:55.016 "peer_address": { 00:25:55.016 
"trtype": "TCP", 00:25:55.016 "adrfam": "IPv4", 00:25:55.016 "traddr": "10.0.0.1", 00:25:55.016 "trsvcid": "55690" 00:25:55.016 }, 00:25:55.016 "auth": { 00:25:55.016 "state": "completed", 00:25:55.016 "digest": "sha512", 00:25:55.016 "dhgroup": "ffdhe3072" 00:25:55.016 } 00:25:55.016 } 00:25:55.016 ]' 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:55.016 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:55.275 09:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:56.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:56.214 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:56.474 00:25:56.474 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:56.474 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:56.474 09:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:56.734 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.734 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:56.734 09:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.734 09:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:56.734 09:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.734 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:56.734 { 00:25:56.734 "cntlid": 117, 00:25:56.734 "qid": 0, 00:25:56.734 "state": "enabled", 00:25:56.734 "listen_address": { 00:25:56.734 "trtype": "TCP", 00:25:56.734 "adrfam": "IPv4", 00:25:56.734 "traddr": "10.0.0.2", 00:25:56.734 "trsvcid": "4420" 00:25:56.734 }, 00:25:56.734 "peer_address": { 00:25:56.734 "trtype": "TCP", 00:25:56.734 "adrfam": "IPv4", 00:25:56.735 "traddr": "10.0.0.1", 00:25:56.735 "trsvcid": "55722" 00:25:56.735 }, 00:25:56.735 "auth": { 00:25:56.735 "state": "completed", 00:25:56.735 "digest": "sha512", 00:25:56.735 "dhgroup": "ffdhe3072" 00:25:56.735 } 00:25:56.735 } 00:25:56.735 ]' 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:56.735 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:56.996 09:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:57.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:57.567 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:57.827 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:58.087 00:25:58.088 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:58.088 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:58.088 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.347 09:38:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:58.347 { 00:25:58.347 "cntlid": 119, 00:25:58.347 "qid": 0, 00:25:58.347 "state": "enabled", 00:25:58.347 "listen_address": { 00:25:58.347 "trtype": "TCP", 00:25:58.347 "adrfam": "IPv4", 00:25:58.347 "traddr": "10.0.0.2", 00:25:58.347 "trsvcid": "4420" 00:25:58.347 }, 00:25:58.347 "peer_address": { 00:25:58.347 "trtype": "TCP", 00:25:58.347 "adrfam": "IPv4", 00:25:58.347 "traddr": "10.0.0.1", 00:25:58.347 "trsvcid": "55750" 00:25:58.347 }, 00:25:58.347 "auth": { 00:25:58.347 "state": "completed", 00:25:58.347 "digest": "sha512", 00:25:58.347 "dhgroup": "ffdhe3072" 00:25:58.347 } 00:25:58.347 } 00:25:58.347 ]' 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:58.347 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:58.348 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:58.348 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:58.348 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:58.348 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:58.348 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:58.348 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:58.607 09:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:59.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.178 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:59.439 09:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:59.699 00:25:59.700 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:59.700 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:59.700 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:59.959 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.959 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:59.959 09:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.959 09:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:59.959 09:38:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.959 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:59.959 { 00:25:59.959 "cntlid": 121, 00:25:59.959 "qid": 0, 00:25:59.959 "state": "enabled", 00:25:59.959 "listen_address": { 00:25:59.959 "trtype": "TCP", 00:25:59.959 "adrfam": "IPv4", 00:25:59.959 "traddr": "10.0.0.2", 00:25:59.959 "trsvcid": "4420" 00:25:59.959 }, 00:25:59.959 "peer_address": { 00:25:59.959 "trtype": "TCP", 00:25:59.959 "adrfam": "IPv4", 00:25:59.959 "traddr": "10.0.0.1", 00:25:59.959 "trsvcid": "55764" 00:25:59.959 }, 00:25:59.959 "auth": { 00:25:59.959 "state": "completed", 00:25:59.960 "digest": "sha512", 00:25:59.960 "dhgroup": "ffdhe4096" 00:25:59.960 } 00:25:59.960 } 00:25:59.960 ]' 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:59.960 09:38:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:59.960 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:00.220 09:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:26:00.790 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:01.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:26:01.051 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:26:01.313 00:26:01.313 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:01.313 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:01.313 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:01.573 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.574 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:01.574 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.574 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:01.574 09:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.574 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:01.574 { 00:26:01.574 "cntlid": 123, 00:26:01.574 "qid": 0, 00:26:01.574 "state": "enabled", 00:26:01.574 "listen_address": { 00:26:01.574 "trtype": "TCP", 00:26:01.574 "adrfam": "IPv4", 00:26:01.574 "traddr": "10.0.0.2", 00:26:01.574 "trsvcid": "4420" 00:26:01.574 }, 00:26:01.574 "peer_address": { 00:26:01.574 "trtype": "TCP", 00:26:01.574 "adrfam": "IPv4", 00:26:01.574 "traddr": "10.0.0.1", 00:26:01.574 "trsvcid": "55802" 00:26:01.574 }, 00:26:01.574 "auth": { 00:26:01.574 "state": "completed", 00:26:01.574 "digest": "sha512", 00:26:01.574 "dhgroup": "ffdhe4096" 00:26:01.574 } 00:26:01.574 } 00:26:01.574 ]' 00:26:01.574 09:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:01.574 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:01.834 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:26:02.777 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:02.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:26:02.777 09:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:02.777 09:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.777 09:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.777 09:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:02.777 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:03.038 00:26:03.038 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:03.038 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:03.038 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:03.038 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.298 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:03.298 09:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.298 09:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.298 09:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
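Each iteration of the loop driving this trace has the same shape as the commands around this point: reconfigure the host's DH-HMAC-CHAP options, register the next key for the host NQN on the target, attach with the SPDK host stack, then repeat the handshake with the kernel initiator before removing the host again. A minimal sketch of one iteration, assembled from the logged calls (the wrapper definitions and the elided secret are assumptions; the trace above shows the full values):

# Sketch of one (dhgroup, key) iteration, matching the ffdhe4096/key2 pass shown here.
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204
dhgroup=ffdhe4096; keyid=2

# host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

# target side: allow this host NQN, bound to the key being exercised
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

# host side: attach over TCP, authenticating with the same key (then verified and detached)
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"

# the kernel initiator then repeats the handshake with the matching DHHC-1 secret
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
     --dhchap-secret 'DHHC-1:02:...'   # full secret value appears verbatim in the trace
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"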
00:26:03.298 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:03.298 { 00:26:03.298 "cntlid": 125, 00:26:03.298 "qid": 0, 00:26:03.298 "state": "enabled", 00:26:03.298 "listen_address": { 00:26:03.298 "trtype": "TCP", 00:26:03.299 "adrfam": "IPv4", 00:26:03.299 "traddr": "10.0.0.2", 00:26:03.299 "trsvcid": "4420" 00:26:03.299 }, 00:26:03.299 "peer_address": { 00:26:03.299 "trtype": "TCP", 00:26:03.299 "adrfam": "IPv4", 00:26:03.299 "traddr": "10.0.0.1", 00:26:03.299 "trsvcid": "55034" 00:26:03.299 }, 00:26:03.299 "auth": { 00:26:03.299 "state": "completed", 00:26:03.299 "digest": "sha512", 00:26:03.299 "dhgroup": "ffdhe4096" 00:26:03.299 } 00:26:03.299 } 00:26:03.299 ]' 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:03.299 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:03.559 09:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:04.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:04.130 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:04.393 09:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:04.653 00:26:04.653 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:04.653 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:04.653 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:04.914 { 00:26:04.914 "cntlid": 127, 00:26:04.914 "qid": 0, 00:26:04.914 "state": "enabled", 00:26:04.914 "listen_address": { 00:26:04.914 "trtype": "TCP", 00:26:04.914 "adrfam": "IPv4", 00:26:04.914 "traddr": "10.0.0.2", 00:26:04.914 "trsvcid": "4420" 00:26:04.914 }, 00:26:04.914 "peer_address": { 00:26:04.914 "trtype": "TCP", 00:26:04.914 "adrfam": "IPv4", 00:26:04.914 "traddr": "10.0.0.1", 00:26:04.914 "trsvcid": "55060" 00:26:04.914 }, 00:26:04.914 "auth": { 00:26:04.914 "state": "completed", 00:26:04.914 "digest": "sha512", 00:26:04.914 "dhgroup": "ffdhe4096" 00:26:04.914 } 00:26:04.914 } 00:26:04.914 ]' 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:04.914 09:38:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:04.914 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:05.175 09:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:26:05.746 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:06.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:06.007 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:06.267 00:26:06.529 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:06.529 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:06.529 09:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:06.529 { 00:26:06.529 "cntlid": 129, 00:26:06.529 "qid": 0, 00:26:06.529 "state": "enabled", 00:26:06.529 "listen_address": { 00:26:06.529 "trtype": "TCP", 00:26:06.529 "adrfam": "IPv4", 00:26:06.529 "traddr": "10.0.0.2", 00:26:06.529 "trsvcid": "4420" 00:26:06.529 }, 00:26:06.529 "peer_address": { 00:26:06.529 "trtype": "TCP", 00:26:06.529 "adrfam": "IPv4", 00:26:06.529 "traddr": "10.0.0.1", 00:26:06.529 "trsvcid": "55090" 00:26:06.529 }, 00:26:06.529 "auth": { 00:26:06.529 "state": "completed", 00:26:06.529 "digest": "sha512", 00:26:06.529 "dhgroup": "ffdhe6144" 00:26:06.529 } 00:26:06.529 } 00:26:06.529 ]' 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:06.529 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:06.789 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:06.789 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:06.789 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:06.789 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:06.789 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:06.789 09:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:07.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:26:07.731 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:26:08.301 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:08.301 { 00:26:08.301 "cntlid": 131, 00:26:08.301 "qid": 0, 00:26:08.301 "state": "enabled", 00:26:08.301 "listen_address": { 00:26:08.301 "trtype": "TCP", 00:26:08.301 "adrfam": "IPv4", 00:26:08.301 "traddr": "10.0.0.2", 00:26:08.301 "trsvcid": "4420" 00:26:08.301 }, 00:26:08.301 "peer_address": { 00:26:08.301 
"trtype": "TCP", 00:26:08.301 "adrfam": "IPv4", 00:26:08.301 "traddr": "10.0.0.1", 00:26:08.301 "trsvcid": "55128" 00:26:08.301 }, 00:26:08.301 "auth": { 00:26:08.301 "state": "completed", 00:26:08.301 "digest": "sha512", 00:26:08.301 "dhgroup": "ffdhe6144" 00:26:08.301 } 00:26:08.301 } 00:26:08.301 ]' 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:08.301 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:08.561 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:08.561 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:08.561 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:08.561 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:08.561 09:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:08.561 09:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:09.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:09.501 09:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:09.501 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:10.072 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:10.072 { 00:26:10.072 "cntlid": 133, 00:26:10.072 "qid": 0, 00:26:10.072 "state": "enabled", 00:26:10.072 "listen_address": { 00:26:10.072 "trtype": "TCP", 00:26:10.072 "adrfam": "IPv4", 00:26:10.072 "traddr": "10.0.0.2", 00:26:10.072 "trsvcid": "4420" 00:26:10.072 }, 00:26:10.072 "peer_address": { 00:26:10.072 "trtype": "TCP", 00:26:10.072 "adrfam": "IPv4", 00:26:10.072 "traddr": "10.0.0.1", 00:26:10.072 "trsvcid": "55156" 00:26:10.072 }, 00:26:10.072 "auth": { 00:26:10.072 "state": "completed", 00:26:10.072 "digest": "sha512", 00:26:10.072 "dhgroup": "ffdhe6144" 00:26:10.072 } 00:26:10.072 } 00:26:10.072 ]' 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:10.072 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:10.333 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:10.333 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:10.333 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:10.333 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:10.333 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:10.333 09:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:11.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:26:11.277 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:11.278 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:26:11.278 09:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.278 09:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.278 09:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.278 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.278 09:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.850 00:26:11.850 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:11.850 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:11.850 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:11.850 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.850 09:39:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:11.850 09:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.851 09:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.851 09:39:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.851 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:11.851 { 00:26:11.851 "cntlid": 135, 00:26:11.851 "qid": 0, 00:26:11.851 "state": "enabled", 00:26:11.851 "listen_address": { 00:26:11.851 "trtype": "TCP", 00:26:11.851 "adrfam": "IPv4", 00:26:11.851 "traddr": "10.0.0.2", 00:26:11.851 "trsvcid": "4420" 00:26:11.851 }, 00:26:11.851 "peer_address": { 00:26:11.851 "trtype": "TCP", 00:26:11.851 "adrfam": "IPv4", 00:26:11.851 "traddr": "10.0.0.1", 00:26:11.851 "trsvcid": "55184" 00:26:11.851 }, 00:26:11.851 "auth": { 00:26:11.851 "state": "completed", 00:26:11.851 "digest": "sha512", 00:26:11.851 "dhgroup": "ffdhe6144" 00:26:11.851 } 00:26:11.851 } 00:26:11.851 ]' 00:26:11.851 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:11.851 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:11.851 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:12.111 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:26:12.111 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:12.111 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:12.111 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:12.111 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:12.111 09:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:13.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:13.054 09:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:13.626 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:13.626 { 00:26:13.626 "cntlid": 137, 00:26:13.626 "qid": 0, 00:26:13.626 "state": "enabled", 00:26:13.626 "listen_address": { 00:26:13.626 "trtype": "TCP", 00:26:13.626 "adrfam": "IPv4", 00:26:13.626 "traddr": "10.0.0.2", 00:26:13.626 "trsvcid": "4420" 00:26:13.626 }, 00:26:13.626 "peer_address": { 00:26:13.626 "trtype": "TCP", 00:26:13.626 "adrfam": "IPv4", 00:26:13.626 "traddr": "10.0.0.1", 00:26:13.626 "trsvcid": "57302" 00:26:13.626 }, 00:26:13.626 "auth": { 00:26:13.626 "state": "completed", 00:26:13.626 "digest": "sha512", 00:26:13.626 "dhgroup": "ffdhe8192" 00:26:13.626 } 00:26:13.626 } 00:26:13.626 ]' 00:26:13.626 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:13.887 09:39:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:13.887 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:13.887 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:13.887 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:13.887 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:13.887 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:13.887 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:14.148 09:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:14.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:14.719 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:26:14.981 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:26:15.552 00:26:15.552 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:15.552 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:15.552 09:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:15.552 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.552 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:15.552 09:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.552 09:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:15.552 09:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.552 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:15.552 { 00:26:15.552 "cntlid": 139, 00:26:15.552 "qid": 0, 00:26:15.552 "state": "enabled", 00:26:15.552 "listen_address": { 00:26:15.552 "trtype": "TCP", 00:26:15.552 "adrfam": "IPv4", 00:26:15.552 "traddr": "10.0.0.2", 00:26:15.552 "trsvcid": "4420" 00:26:15.552 }, 00:26:15.552 "peer_address": { 00:26:15.552 "trtype": "TCP", 00:26:15.552 "adrfam": "IPv4", 00:26:15.552 "traddr": "10.0.0.1", 00:26:15.552 "trsvcid": "57332" 00:26:15.552 }, 00:26:15.552 "auth": { 00:26:15.552 "state": "completed", 00:26:15.552 "digest": "sha512", 00:26:15.552 "dhgroup": "ffdhe8192" 00:26:15.552 } 00:26:15.552 } 00:26:15.552 ]' 00:26:15.812 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:15.812 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:15.812 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:15.812 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:15.812 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:15.812 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:15.813 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:15.813 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:16.072 09:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:01:NGZiODkzOTIyN2Y0NzEwNDU3ZDM1MzFlYzIyNDdlM2YaZLfC: 00:26:16.643 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:16.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:26:16.643 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key2 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:16.644 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:17.217 00:26:17.217 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:17.217 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:17.217 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:17.476 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.476 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:17.476 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.476 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:17.476 09:39:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
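Besides the bdev_nvme attach path, each iteration in the trace also exercises the kernel NVMe/TCP initiator: nvme connect is issued with the DHHC-1 secret corresponding to the key configured on the subsystem, and the trace then expects a clean "disconnected 1 controller(s)" on teardown. Below is a minimal host-side sketch of that step; the address, NQNs and host ID are the ones appearing in the log, while the secret is a placeholder rather than usable key material.

#!/usr/bin/env bash
# Kernel-initiator leg of the check, as seen in the target/auth.sh@51..54 entries.
# The DHHC-1 secret is a placeholder; a real run uses the key generated for the test.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204
secret='DHHC-1:00:<base64 key material>:'   # placeholder secret string

# Connect with one I/O queue, authenticating with the per-key DH-HMAC-CHAP secret
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
     -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$secret"

# Drop the association again; success prints "disconnected 1 controller(s)"
nvme disconnect -n "$subnqn"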
00:26:17.476 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:17.477 { 00:26:17.477 "cntlid": 141, 00:26:17.477 "qid": 0, 00:26:17.477 "state": "enabled", 00:26:17.477 "listen_address": { 00:26:17.477 "trtype": "TCP", 00:26:17.477 "adrfam": "IPv4", 00:26:17.477 "traddr": "10.0.0.2", 00:26:17.477 "trsvcid": "4420" 00:26:17.477 }, 00:26:17.477 "peer_address": { 00:26:17.477 "trtype": "TCP", 00:26:17.477 "adrfam": "IPv4", 00:26:17.477 "traddr": "10.0.0.1", 00:26:17.477 "trsvcid": "57358" 00:26:17.477 }, 00:26:17.477 "auth": { 00:26:17.477 "state": "completed", 00:26:17.477 "digest": "sha512", 00:26:17.477 "dhgroup": "ffdhe8192" 00:26:17.477 } 00:26:17.477 } 00:26:17.477 ]' 00:26:17.477 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:17.477 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:17.477 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:17.477 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:17.477 09:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:17.737 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:17.737 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:17.737 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:17.737 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:02:NzQyYTM0ZmI5NzI4Nzg4ZjgzZjIxOWE1N2JhYjk5NTgzYzhkZTRlMWY0MTVjODIwkAOM9w==: 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:18.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.678 09:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key3 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:18.678 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:19.249 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:19.249 { 00:26:19.249 "cntlid": 143, 00:26:19.249 "qid": 0, 00:26:19.249 "state": "enabled", 00:26:19.249 "listen_address": { 00:26:19.249 "trtype": "TCP", 00:26:19.249 "adrfam": "IPv4", 00:26:19.249 "traddr": "10.0.0.2", 00:26:19.249 "trsvcid": "4420" 00:26:19.249 }, 00:26:19.249 "peer_address": { 00:26:19.249 "trtype": "TCP", 00:26:19.249 "adrfam": "IPv4", 00:26:19.249 "traddr": "10.0.0.1", 00:26:19.249 "trsvcid": "57372" 00:26:19.249 }, 00:26:19.249 "auth": { 00:26:19.249 "state": "completed", 00:26:19.249 "digest": "sha512", 00:26:19.249 "dhgroup": "ffdhe8192" 00:26:19.249 } 00:26:19.249 } 00:26:19.249 ]' 00:26:19.249 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:19.510 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:19.510 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:19.510 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:19.510 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:19.510 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:19.510 09:39:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:19.510 09:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:19.769 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:03:YmE3ODQyZDcwZGI5N2RmOTUyNTRkMDM5NzBlMmU0ZTUwMTg4MGFkZWQ4NzMzYjc0ODUyY2UxZTI5NmRhM2ZhM3KzMj0=: 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:20.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:20.339 09:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key0 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.598 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:20.598 
09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:21.168 00:26:21.168 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:26:21.168 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:21.168 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:26:21.168 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.168 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:26:21.429 { 00:26:21.429 "cntlid": 145, 00:26:21.429 "qid": 0, 00:26:21.429 "state": "enabled", 00:26:21.429 "listen_address": { 00:26:21.429 "trtype": "TCP", 00:26:21.429 "adrfam": "IPv4", 00:26:21.429 "traddr": "10.0.0.2", 00:26:21.429 "trsvcid": "4420" 00:26:21.429 }, 00:26:21.429 "peer_address": { 00:26:21.429 "trtype": "TCP", 00:26:21.429 "adrfam": "IPv4", 00:26:21.429 "traddr": "10.0.0.1", 00:26:21.429 "trsvcid": "57388" 00:26:21.429 }, 00:26:21.429 "auth": { 00:26:21.429 "state": "completed", 00:26:21.429 "digest": "sha512", 00:26:21.429 "dhgroup": "ffdhe8192" 00:26:21.429 } 00:26:21.429 } 00:26:21.429 ]' 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:21.429 09:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:21.689 09:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid 80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-secret DHHC-1:00:NzViM2FmN2M1OTBjZjUzMDYzNDI5YjFmMWU1NzVhZjEyNzVkZmI0MTlmYjdmMGQw3lNbvg==: 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:26:22.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --dhchap-key key1 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:22.261 09:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:26:22.831 request: 00:26:22.831 { 00:26:22.831 "name": "nvme0", 00:26:22.831 "trtype": "tcp", 00:26:22.831 "traddr": "10.0.0.2", 00:26:22.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204", 00:26:22.831 "adrfam": "ipv4", 00:26:22.831 "trsvcid": "4420", 00:26:22.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:22.831 "dhchap_key": "key2", 00:26:22.831 "method": "bdev_nvme_attach_controller", 00:26:22.831 "req_id": 1 00:26:22.831 } 00:26:22.831 Got JSON-RPC error response 00:26:22.831 response: 00:26:22.831 { 00:26:22.831 "code": -32602, 00:26:22.831 "message": "Invalid parameters" 00:26:22.831 } 00:26:22.831 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:26:22.831 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:22.831 09:39:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:22.831 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:22.831 09:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:22.831 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.831 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 343246 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 343246 ']' 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 343246 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 343246 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 343246' 00:26:22.832 killing process with pid 343246 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 343246 00:26:22.832 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 343246 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.091 rmmod nvme_tcp 00:26:23.091 rmmod nvme_fabrics 00:26:23.091 rmmod nvme_keyring 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 342901 ']' 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 342901 00:26:23.091 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 342901 ']' 00:26:23.092 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 342901 00:26:23.092 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:26:23.092 09:39:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:23.092 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 342901 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 342901' 00:26:23.352 killing process with pid 342901 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 342901 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 342901 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.352 09:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.354 09:39:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.354 09:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Yp9 /tmp/spdk.key-sha256.mG0 /tmp/spdk.key-sha384.Kxc /tmp/spdk.key-sha512.y5D /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:26:25.354 00:26:25.354 real 2m19.346s 00:26:25.354 user 5m8.327s 00:26:25.354 sys 0m20.350s 00:26:25.354 09:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:25.354 09:39:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:25.354 ************************************ 00:26:25.354 END TEST nvmf_auth_target 00:26:25.354 ************************************ 00:26:25.354 09:39:18 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:26:25.354 09:39:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:26:25.355 09:39:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:26:25.355 09:39:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:25.355 09:39:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.615 ************************************ 00:26:25.615 START TEST nvmf_bdevio_no_huge 00:26:25.615 ************************************ 00:26:25.615 09:39:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:26:25.615 * Looking for test storage... 
00:26:25.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.615 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.616 09:39:19 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.616 09:39:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.761 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.762 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.762 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.762 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.762 09:39:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.762 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.762 09:39:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:33.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:26:33.762 00:26:33.762 --- 10.0.0.2 ping statistics --- 00:26:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.762 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:26:33.762 00:26:33.762 --- 10.0.0.1 ping statistics --- 00:26:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.762 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=374220 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 374220 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 374220 ']' 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
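The nvmf_tcp_init sequence above builds a self-contained TCP test topology out of the two E810 ports it discovered: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and addressed as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is confirmed with a ping in each direction before nvmf_tgt is launched inside the namespace with --no-huge -s 1024 -m 0x78. A minimal standalone sketch of that wiring, assembled from the commands in this run and assuming two directly connected ports with the same names (cvl_0_0 / cvl_0_1):

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator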
00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:33.762 09:39:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.762 [2024-05-16 09:39:26.300371] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:26:33.762 [2024-05-16 09:39:26.300441] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:26:33.762 [2024-05-16 09:39:26.393516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:33.762 [2024-05-16 09:39:26.497195] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.762 [2024-05-16 09:39:26.497247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.762 [2024-05-16 09:39:26.497254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.763 [2024-05-16 09:39:26.497261] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.763 [2024-05-16 09:39:26.497271] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.763 [2024-05-16 09:39:26.497430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:33.763 [2024-05-16 09:39:26.497590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:33.763 [2024-05-16 09:39:26.497749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:33.763 [2024-05-16 09:39:26.497749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.763 [2024-05-16 09:39:27.146627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.763 Malloc0 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:33.763 [2024-05-16 09:39:27.199863] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:33.763 [2024-05-16 09:39:27.200176] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:33.763 { 00:26:33.763 "params": { 00:26:33.763 "name": "Nvme$subsystem", 00:26:33.763 "trtype": "$TEST_TRANSPORT", 00:26:33.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:33.763 "adrfam": "ipv4", 00:26:33.763 "trsvcid": "$NVMF_PORT", 00:26:33.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:33.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:33.763 "hdgst": ${hdgst:-false}, 00:26:33.763 "ddgst": ${ddgst:-false} 00:26:33.763 }, 00:26:33.763 "method": "bdev_nvme_attach_controller" 00:26:33.763 } 00:26:33.763 EOF 00:26:33.763 )") 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
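Target-side provisioning for this bdevio run is a short RPC sequence: create the TCP transport, back it with a 64 MiB / 512-byte-block malloc bdev, expose that bdev through subsystem nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420; bdevio then attaches to the namespace using the JSON parameters printed just below. A condensed sketch of the same provisioning, assuming rpc_cmd resolves to scripts/rpc.py on its default /var/tmp/spdk.sock socket (the socket path is not shown explicitly in this log):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420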
00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:26:33.763 09:39:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:33.763 "params": { 00:26:33.763 "name": "Nvme1", 00:26:33.763 "trtype": "tcp", 00:26:33.763 "traddr": "10.0.0.2", 00:26:33.763 "adrfam": "ipv4", 00:26:33.763 "trsvcid": "4420", 00:26:33.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:33.763 "hdgst": false, 00:26:33.763 "ddgst": false 00:26:33.763 }, 00:26:33.763 "method": "bdev_nvme_attach_controller" 00:26:33.763 }' 00:26:33.763 [2024-05-16 09:39:27.255740] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:26:33.763 [2024-05-16 09:39:27.255807] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid374309 ] 00:26:34.024 [2024-05-16 09:39:27.322988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:34.024 [2024-05-16 09:39:27.419152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.024 [2024-05-16 09:39:27.419368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.024 [2024-05-16 09:39:27.419372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.284 I/O targets: 00:26:34.284 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:34.284 00:26:34.284 00:26:34.284 CUnit - A unit testing framework for C - Version 2.1-3 00:26:34.284 http://cunit.sourceforge.net/ 00:26:34.284 00:26:34.284 00:26:34.284 Suite: bdevio tests on: Nvme1n1 00:26:34.284 Test: blockdev write read block ...passed 00:26:34.284 Test: blockdev write zeroes read block ...passed 00:26:34.284 Test: blockdev write zeroes read no split ...passed 00:26:34.284 Test: blockdev write zeroes read split ...passed 00:26:34.545 Test: blockdev write zeroes read split partial ...passed 00:26:34.545 Test: blockdev reset ...[2024-05-16 09:39:27.866390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.545 [2024-05-16 09:39:27.866451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a9a80 (9): Bad file descriptor 00:26:34.545 [2024-05-16 09:39:27.879444] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.545 passed 00:26:34.545 Test: blockdev write read 8 blocks ...passed 00:26:34.545 Test: blockdev write read size > 128k ...passed 00:26:34.545 Test: blockdev write read invalid size ...passed 00:26:34.545 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:34.545 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:34.545 Test: blockdev write read max offset ...passed 00:26:34.545 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:34.545 Test: blockdev writev readv 8 blocks ...passed 00:26:34.545 Test: blockdev writev readv 30 x 1block ...passed 00:26:34.545 Test: blockdev writev readv block ...passed 00:26:34.545 Test: blockdev writev readv size > 128k ...passed 00:26:34.545 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:34.545 Test: blockdev comparev and writev ...[2024-05-16 09:39:28.103978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.545 [2024-05-16 09:39:28.104003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:34.545 [2024-05-16 09:39:28.104013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.545 [2024-05-16 09:39:28.104019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.545 [2024-05-16 09:39:28.104521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.546 [2024-05-16 09:39:28.104533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:34.546 [2024-05-16 09:39:28.104543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.546 [2024-05-16 09:39:28.104548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:34.546 [2024-05-16 09:39:28.105064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.546 [2024-05-16 09:39:28.105073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:34.546 [2024-05-16 09:39:28.105082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.546 [2024-05-16 09:39:28.105087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:34.816 [2024-05-16 09:39:28.105620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.816 [2024-05-16 09:39:28.105629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:34.816 [2024-05-16 09:39:28.105639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:34.816 [2024-05-16 09:39:28.105644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:34.816 passed 00:26:34.816 Test: blockdev nvme passthru rw ...passed 00:26:34.816 Test: blockdev nvme passthru vendor specific ...[2024-05-16 09:39:28.190811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:34.816 [2024-05-16 09:39:28.190822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:34.816 [2024-05-16 09:39:28.191249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:34.816 [2024-05-16 09:39:28.191258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:34.816 [2024-05-16 09:39:28.191679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:34.816 [2024-05-16 09:39:28.191687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:34.816 [2024-05-16 09:39:28.192118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:34.816 [2024-05-16 09:39:28.192126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:34.816 passed 00:26:34.816 Test: blockdev nvme admin passthru ...passed 00:26:34.816 Test: blockdev copy ...passed 00:26:34.816 00:26:34.816 Run Summary: Type Total Ran Passed Failed Inactive 00:26:34.816 suites 1 1 n/a 0 0 00:26:34.816 tests 23 23 23 0 0 00:26:34.816 asserts 152 152 152 0 n/a 00:26:34.816 00:26:34.816 Elapsed time = 1.145 seconds 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.076 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.077 rmmod nvme_tcp 00:26:35.077 rmmod nvme_fabrics 00:26:35.077 rmmod nvme_keyring 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 374220 ']' 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 374220 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 374220 ']' 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 374220 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 374220 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 374220' 00:26:35.077 killing process with pid 374220 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 374220 00:26:35.077 [2024-05-16 09:39:28.628313] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:35.077 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 374220 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.337 09:39:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.880 09:39:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.881 00:26:37.881 real 0m11.970s 00:26:37.881 user 0m13.665s 00:26:37.881 sys 0m6.182s 00:26:37.881 09:39:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:37.881 09:39:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:26:37.881 ************************************ 00:26:37.881 END TEST nvmf_bdevio_no_huge 00:26:37.881 ************************************ 00:26:37.881 09:39:30 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:26:37.881 09:39:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:37.881 09:39:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:37.881 09:39:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.881 ************************************ 00:26:37.881 START TEST nvmf_tls 00:26:37.881 ************************************ 00:26:37.881 09:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:26:37.881 * Looking for test 
storage... 00:26:37.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.881 09:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.466 
09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:44.466 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:44.466 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:44.466 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:44.466 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:44.467 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:26:44.467 00:26:44.467 --- 10.0.0.2 ping statistics --- 00:26:44.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.467 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:26:44.467 09:39:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:26:44.467 00:26:44.467 --- 10.0.0.1 ping statistics --- 00:26:44.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.467 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:44.467 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=378703 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 378703 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 378703 ']' 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:44.728 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:44.728 [2024-05-16 09:39:38.113903] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:26:44.728 [2024-05-16 09:39:38.113973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.728 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.728 [2024-05-16 09:39:38.200699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.991 [2024-05-16 09:39:38.294240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.991 [2024-05-16 09:39:38.294297] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:44.991 [2024-05-16 09:39:38.294306] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.991 [2024-05-16 09:39:38.294313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.991 [2024-05-16 09:39:38.294318] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.991 [2024-05-16 09:39:38.294344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:26:45.562 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:26:45.562 true 00:26:45.562 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:45.562 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:26:45.823 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:26:45.823 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:26:45.823 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:26:45.823 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:45.823 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:26:46.083 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:26:46.083 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:26:46.083 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:26:46.343 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:46.343 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:26:46.343 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:26:46.343 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:26:46.343 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:46.343 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:26:46.604 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:26:46.604 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:26:46.604 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:26:46.604 09:39:40 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:46.604 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:26:46.864 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:26:46.864 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:26:46.864 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:26:47.126 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.84cR5OgmdY 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.1v8gK86imo 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.84cR5OgmdY 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1v8gK86imo 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:26:47.387 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:26:47.647 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.84cR5OgmdY 00:26:47.647 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.84cR5OgmdY 00:26:47.647 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:47.907 [2024-05-16 09:39:41.311519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.907 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:48.168 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:48.168 [2024-05-16 09:39:41.600204] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:48.168 [2024-05-16 09:39:41.600238] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:48.168 [2024-05-16 09:39:41.600383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.168 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:48.429 malloc0 00:26:48.429 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:48.429 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.84cR5OgmdY 00:26:48.690 [2024-05-16 09:39:42.035037] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:48.690 09:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.84cR5OgmdY 00:26:48.691 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.691 Initializing NVMe Controllers 00:26:58.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:58.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:58.691 Initialization complete. Launching workers. 
00:26:58.691 ======================================================== 00:26:58.691 Latency(us) 00:26:58.691 Device Information : IOPS MiB/s Average min max 00:26:58.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19124.67 74.71 3346.43 1124.97 4078.41 00:26:58.691 ======================================================== 00:26:58.691 Total : 19124.67 74.71 3346.43 1124.97 4078.41 00:26:58.691 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.84cR5OgmdY 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.84cR5OgmdY' 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=381549 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 381549 /var/tmp/bdevperf.sock 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 381549 ']' 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:58.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:58.691 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:58.691 [2024-05-16 09:39:52.188182] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:26:58.691 [2024-05-16 09:39:52.188241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381549 ] 00:26:58.691 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.691 [2024-05-16 09:39:52.237757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.952 [2024-05-16 09:39:52.290750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.524 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:59.524 09:39:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:59.524 09:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.84cR5OgmdY 00:26:59.786 [2024-05-16 09:39:53.100085] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:59.786 [2024-05-16 09:39:53.100149] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:59.786 TLSTESTn1 00:26:59.786 09:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:59.786 Running I/O for 10 seconds... 00:27:09.785 00:27:09.785 Latency(us) 00:27:09.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.785 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:09.785 Verification LBA range: start 0x0 length 0x2000 00:27:09.785 TLSTESTn1 : 10.01 3807.26 14.87 0.00 0.00 33584.43 4642.13 66409.81 00:27:09.785 =================================================================================================================== 00:27:09.785 Total : 3807.26 14.87 0.00 0.00 33584.43 4642.13 66409.81 00:27:09.785 0 00:27:09.785 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.785 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 381549 00:27:09.785 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 381549 ']' 00:27:09.785 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 381549 00:27:09.785 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 381549 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 381549' 00:27:10.047 killing process with pid 381549 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 381549 00:27:10.047 Received shutdown signal, test time was about 10.000000 seconds 00:27:10.047 00:27:10.047 Latency(us) 00:27:10.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.047 
=================================================================================================================== 00:27:10.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.047 [2024-05-16 09:40:03.397783] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 381549 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1v8gK86imo 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1v8gK86imo 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1v8gK86imo 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.1v8gK86imo' 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=383707 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 383707 /var/tmp/bdevperf.sock 00:27:10.047 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:10.048 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 383707 ']' 00:27:10.048 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:10.048 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:10.048 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:10.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:10.048 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:10.048 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:10.048 [2024-05-16 09:40:03.562254] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:10.048 [2024-05-16 09:40:03.562308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383707 ] 00:27:10.048 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.308 [2024-05-16 09:40:03.611767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.308 [2024-05-16 09:40:03.662177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:10.881 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:10.881 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:10.881 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.1v8gK86imo 00:27:11.141 [2024-05-16 09:40:04.487513] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:11.142 [2024-05-16 09:40:04.487585] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:11.142 [2024-05-16 09:40:04.494072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:11.142 [2024-05-16 09:40:04.494698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x710db0 (107): Transport endpoint is not connected 00:27:11.142 [2024-05-16 09:40:04.495694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x710db0 (9): Bad file descriptor 00:27:11.142 [2024-05-16 09:40:04.496695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:11.142 [2024-05-16 09:40:04.496703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:11.142 [2024-05-16 09:40:04.496710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:11.142 request: 00:27:11.142 { 00:27:11.142 "name": "TLSTEST", 00:27:11.142 "trtype": "tcp", 00:27:11.142 "traddr": "10.0.0.2", 00:27:11.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:11.142 "adrfam": "ipv4", 00:27:11.142 "trsvcid": "4420", 00:27:11.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:11.142 "psk": "/tmp/tmp.1v8gK86imo", 00:27:11.142 "method": "bdev_nvme_attach_controller", 00:27:11.142 "req_id": 1 00:27:11.142 } 00:27:11.142 Got JSON-RPC error response 00:27:11.142 response: 00:27:11.142 { 00:27:11.142 "code": -32602, 00:27:11.142 "message": "Invalid parameters" 00:27:11.142 } 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 383707 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 383707 ']' 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 383707 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 383707 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 383707' 00:27:11.142 killing process with pid 383707 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 383707 00:27:11.142 Received shutdown signal, test time was about 10.000000 seconds 00:27:11.142 00:27:11.142 Latency(us) 00:27:11.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.142 =================================================================================================================== 00:27:11.142 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:11.142 [2024-05-16 09:40:04.581918] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 383707 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.84cR5OgmdY 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.84cR5OgmdY 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.84cR5OgmdY 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.84cR5OgmdY' 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=384045 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 384045 /var/tmp/bdevperf.sock 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 384045 ']' 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:11.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:11.142 09:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:11.402 [2024-05-16 09:40:04.736081] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:11.402 [2024-05-16 09:40:04.736136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384045 ] 00:27:11.402 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.402 [2024-05-16 09:40:04.785666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.402 [2024-05-16 09:40:04.835963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.972 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:11.972 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:11.972 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.84cR5OgmdY 00:27:12.232 [2024-05-16 09:40:05.653146] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:12.232 [2024-05-16 09:40:05.653206] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:12.232 [2024-05-16 09:40:05.657369] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:27:12.232 [2024-05-16 09:40:05.657388] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:27:12.232 [2024-05-16 09:40:05.657407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:12.232 [2024-05-16 09:40:05.658068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177adb0 (107): Transport endpoint is not connected 00:27:12.232 [2024-05-16 09:40:05.659063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177adb0 (9): Bad file descriptor 00:27:12.232 [2024-05-16 09:40:05.660064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:12.232 [2024-05-16 09:40:05.660072] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:12.232 [2024-05-16 09:40:05.660082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:12.232 request: 00:27:12.232 { 00:27:12.232 "name": "TLSTEST", 00:27:12.232 "trtype": "tcp", 00:27:12.232 "traddr": "10.0.0.2", 00:27:12.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:12.232 "adrfam": "ipv4", 00:27:12.232 "trsvcid": "4420", 00:27:12.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.232 "psk": "/tmp/tmp.84cR5OgmdY", 00:27:12.232 "method": "bdev_nvme_attach_controller", 00:27:12.232 "req_id": 1 00:27:12.232 } 00:27:12.232 Got JSON-RPC error response 00:27:12.232 response: 00:27:12.232 { 00:27:12.232 "code": -32602, 00:27:12.232 "message": "Invalid parameters" 00:27:12.232 } 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 384045 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 384045 ']' 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 384045 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 384045 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:12.232 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 384045' 00:27:12.233 killing process with pid 384045 00:27:12.233 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 384045 00:27:12.233 Received shutdown signal, test time was about 10.000000 seconds 00:27:12.233 00:27:12.233 Latency(us) 00:27:12.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.233 =================================================================================================================== 00:27:12.233 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:12.233 [2024-05-16 09:40:05.739134] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:12.233 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 384045 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.84cR5OgmdY 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.84cR5OgmdY 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.84cR5OgmdY 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.84cR5OgmdY' 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=384184 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 384184 /var/tmp/bdevperf.sock 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 384184 ']' 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:12.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.493 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:12.493 [2024-05-16 09:40:05.893987] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:12.493 [2024-05-16 09:40:05.894040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384184 ] 00:27:12.493 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.493 [2024-05-16 09:40:05.943830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.493 [2024-05-16 09:40:05.995644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.84cR5OgmdY 00:27:13.435 [2024-05-16 09:40:06.809010] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:13.435 [2024-05-16 09:40:06.809083] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:13.435 [2024-05-16 09:40:06.819616] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:27:13.435 [2024-05-16 09:40:06.819635] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:27:13.435 [2024-05-16 09:40:06.819654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:13.435 [2024-05-16 09:40:06.819859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d3db0 (107): Transport endpoint is not connected 00:27:13.435 [2024-05-16 09:40:06.820854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d3db0 (9): Bad file descriptor 00:27:13.435 [2024-05-16 09:40:06.821856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:13.435 [2024-05-16 09:40:06.821864] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:13.435 [2024-05-16 09:40:06.821871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
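Annotation: the "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>" errors above show how the target resolves keys: the PSK identity is derived from the host NQN and subsystem NQN, and the lookup fails here because no key was ever registered for host1 against cnode2. The keys themselves were produced earlier by format_interchange_psk. Below is a minimal sketch of that formatting step, assuming the construction implied by the trace (the ASCII key with a little-endian CRC32 appended, base64-encoded, and a two-digit hash identifier in the digest field); the real helper in nvmf/common.sh may differ in detail.

format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the hex string is used as ASCII, not decoded to raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # assumption: 4-byte little-endian CRC32 appended to the key
psk = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), psk))
PY
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1   # shape matches the key used in this run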
00:27:13.435 request: 00:27:13.435 { 00:27:13.435 "name": "TLSTEST", 00:27:13.435 "trtype": "tcp", 00:27:13.435 "traddr": "10.0.0.2", 00:27:13.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.435 "adrfam": "ipv4", 00:27:13.435 "trsvcid": "4420", 00:27:13.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.435 "psk": "/tmp/tmp.84cR5OgmdY", 00:27:13.435 "method": "bdev_nvme_attach_controller", 00:27:13.435 "req_id": 1 00:27:13.435 } 00:27:13.435 Got JSON-RPC error response 00:27:13.435 response: 00:27:13.435 { 00:27:13.435 "code": -32602, 00:27:13.435 "message": "Invalid parameters" 00:27:13.435 } 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 384184 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 384184 ']' 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 384184 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 384184 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 384184' 00:27:13.435 killing process with pid 384184 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 384184 00:27:13.435 Received shutdown signal, test time was about 10.000000 seconds 00:27:13.435 00:27:13.435 Latency(us) 00:27:13.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.435 =================================================================================================================== 00:27:13.435 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:13.435 [2024-05-16 09:40:06.908350] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:13.435 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 384184 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:27:13.695 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
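Annotation: the run_bdevperf invocation that follows (this time with an empty PSK) repeats the same three-step flow traced for the earlier cases: launch bdevperf in RPC-wait mode on its own socket, attach a TLS NVMe/TCP controller through that socket, then drive I/O via bdevperf.py. A condensed sketch of that flow, with paths and flags copied from the trace; the actual helper lives in target/tls.sh and additionally waits for the RPC socket and cleans up on exit.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# 1. bdevperf in RPC-wait mode (-z) on a private RPC socket
$SPDK/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# 2. attach a TLS-wrapped controller; with a wrong or missing PSK this is the step that fails
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.84cR5OgmdY

# 3. only reached in the positive case: run the configured verify workload
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests

kill "$bdevperf_pid"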
00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=384405 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 384405 /var/tmp/bdevperf.sock 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 384405 ']' 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:13.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:13.696 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:13.696 [2024-05-16 09:40:07.073616] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:13.696 [2024-05-16 09:40:07.073670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384405 ] 00:27:13.696 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.696 [2024-05-16 09:40:07.122914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.696 [2024-05-16 09:40:07.174363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.636 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:14.636 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:14.636 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:27:14.636 [2024-05-16 09:40:07.993972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:14.636 [2024-05-16 09:40:07.995808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d57b0 (9): Bad file descriptor 00:27:14.636 [2024-05-16 09:40:07.996807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.636 [2024-05-16 09:40:07.996815] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:27:14.636 [2024-05-16 09:40:07.996822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
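Annotation: each of these expected-failure cases is wrapped in the NOT helper from autotest_common.sh, whose expansion produces the 'local es=0', 'es=$?' and '(( !es == 0 ))' steps visible in the trace: the wrapped command's exit status is inverted so that a failed attach counts as a passing check. A simplified sketch of that pattern (SPDK's actual helper also special-cases exit codes above 128, i.e. signal deaths, which is the (( es > 128 )) test seen above):

NOT() {
  local es=0
  "$@" || es=$?
  ((!es == 0))   # succeed only if the wrapped command failed
}

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''   # e.g. the empty-PSK case above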
00:27:14.636 request: 00:27:14.636 { 00:27:14.636 "name": "TLSTEST", 00:27:14.636 "trtype": "tcp", 00:27:14.636 "traddr": "10.0.0.2", 00:27:14.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:14.636 "adrfam": "ipv4", 00:27:14.636 "trsvcid": "4420", 00:27:14.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:14.636 "method": "bdev_nvme_attach_controller", 00:27:14.636 "req_id": 1 00:27:14.636 } 00:27:14.636 Got JSON-RPC error response 00:27:14.636 response: 00:27:14.636 { 00:27:14.636 "code": -32602, 00:27:14.636 "message": "Invalid parameters" 00:27:14.636 } 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 384405 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 384405 ']' 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 384405 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 384405 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 384405' 00:27:14.636 killing process with pid 384405 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 384405 00:27:14.636 Received shutdown signal, test time was about 10.000000 seconds 00:27:14.636 00:27:14.636 Latency(us) 00:27:14.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.636 =================================================================================================================== 00:27:14.636 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 384405 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 378703 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 378703 ']' 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 378703 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:14.636 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 378703 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 378703' 00:27:14.896 killing process with pid 378703 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 378703 00:27:14.896 
[2024-05-16 09:40:08.239824] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:14.896 [2024-05-16 09:40:08.239852] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 378703 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:27:14.896 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.zY1a3MPwhN 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.zY1a3MPwhN 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=384751 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 384751 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 384751 ']' 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:14.897 09:40:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:15.157 [2024-05-16 09:40:08.479816] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
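[Editor's note] The format_interchange_psk / format_key step above appears to build the NVMeTLSkey-1 string by base64-encoding the configured key bytes together with their CRC-32, between the "NVMeTLSkey-1" prefix, a two-digit hash-identifier field, and a trailing colon. The short Python sketch below is an illustration written for this log (not code lifted from the SPDK tree) that reproduces the /tmp/tmp.zY1a3MPwhN value under that assumption.

import base64
import struct
import zlib

def format_interchange_psk(key_text: str, hash_id: int) -> str:
    # Assumption: interchange form = base64(key bytes || CRC-32 of key bytes, little endian),
    # framed as NVMeTLSkey-1:<two-digit hash id>:<base64>:
    data = key_text.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(data + crc).decode("ascii"))

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
# If the CRC-32 assumption holds, this prints the NVMeTLSkey-1:02:MDAx...wWXNJw==: value shown above,
# i.e. exactly what the test writes to the 0600 temp file.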
00:27:15.157 [2024-05-16 09:40:08.479886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.157 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.157 [2024-05-16 09:40:08.559390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.157 [2024-05-16 09:40:08.612075] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.157 [2024-05-16 09:40:08.612110] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.157 [2024-05-16 09:40:08.612116] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.157 [2024-05-16 09:40:08.612120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.157 [2024-05-16 09:40:08.612124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.157 [2024-05-16 09:40:08.612141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.727 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:15.727 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:15.727 09:40:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.727 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.727 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:15.987 09:40:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.987 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.zY1a3MPwhN 00:27:15.987 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zY1a3MPwhN 00:27:15.987 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:15.987 [2024-05-16 09:40:09.458147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.987 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:16.248 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:16.248 [2024-05-16 09:40:09.758865] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:16.248 [2024-05-16 09:40:09.758905] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:16.248 [2024-05-16 09:40:09.759061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.248 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:16.508 malloc0 00:27:16.508 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:27:16.508 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:16.768 [2024-05-16 09:40:10.201766] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zY1a3MPwhN 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zY1a3MPwhN' 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=385114 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 385114 /var/tmp/bdevperf.sock 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 385114 ']' 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:16.768 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:16.769 [2024-05-16 09:40:10.249750] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
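[Editor's note] Condensed, the setup_nvmf_tgt steps above are a plain rpc.py sequence: TCP transport, a subsystem, a TLS-enabled listener (-k), a malloc bdev as namespace 1, and the host registration that carries the PSK path. The sketch below simply replays those exact commands via subprocess; the rpc.py path and addresses are the ones from this CI run, and the target is assumed to already be serving its default /var/tmp/spdk.sock (in the log only the nvmf_tgt process itself runs inside the cvl_0_0_ns_spdk network namespace).

import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"  # path used in this CI run
PSK = "/tmp/tmp.zY1a3MPwhN"                                               # the 0600 key file created above

def rpc(*args: str) -> None:
    # rpc.py speaks JSON-RPC to the target's default /var/tmp/spdk.sock.
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
# -k marks the TCP listener as TLS-enabled ("TLS support is considered experimental" above).
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
# Associates host1 with the PSK file (the deprecated "PSK path" flow flagged in the warnings above).
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "--psk", PSK)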
00:27:16.769 [2024-05-16 09:40:10.249800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385114 ] 00:27:16.769 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.769 [2024-05-16 09:40:10.299240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.028 [2024-05-16 09:40:10.350642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.028 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:17.028 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:17.028 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:17.028 [2024-05-16 09:40:10.570524] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:17.028 [2024-05-16 09:40:10.570587] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:17.289 TLSTESTn1 00:27:17.289 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:27:17.289 Running I/O for 10 seconds... 00:27:29.519 00:27:29.519 Latency(us) 00:27:29.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.519 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:29.519 Verification LBA range: start 0x0 length 0x2000 00:27:29.519 TLSTESTn1 : 10.09 4095.47 16.00 0.00 0.00 31126.63 5324.80 91313.49 00:27:29.519 =================================================================================================================== 00:27:29.519 Total : 4095.47 16.00 0.00 0.00 31126.63 5324.80 91313.49 00:27:29.519 0 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 385114 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 385114 ']' 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 385114 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 385114 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 385114' 00:27:29.519 killing process with pid 385114 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 385114 00:27:29.519 Received shutdown signal, test time was about 10.000000 seconds 00:27:29.519 00:27:29.519 Latency(us) 00:27:29.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.519 
=================================================================================================================== 00:27:29.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.519 [2024-05-16 09:40:20.951729] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:29.519 09:40:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 385114 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.zY1a3MPwhN 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zY1a3MPwhN 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zY1a3MPwhN 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zY1a3MPwhN 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.zY1a3MPwhN' 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=387132 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 387132 /var/tmp/bdevperf.sock 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 387132 ']' 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:29.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:29.519 [2024-05-16 09:40:21.128930] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
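[Editor's note] The TLSTESTn1 run that just completed above was driven by rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller, which sends the same JSON-RPC request that is echoed in the request:/response: dumps in this log. The sketch below is a stripped-down illustration of that call (not part of the test scripts); scripts/rpc.py does the same thing with timeouts and nicer error handling, and the parameters here mirror the successful attach above.

import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, req_id: int = 1) -> dict:
    # Minimal JSON-RPC 2.0 client for an SPDK application socket (what scripts/rpc.py -s ... wraps).
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a complete response arrived")
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # keep reading until the JSON parses

reply = spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.zY1a3MPwhN",
})
print(reply)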
00:27:29.519 [2024-05-16 09:40:21.128985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387132 ] 00:27:29.519 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.519 [2024-05-16 09:40:21.178774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.519 [2024-05-16 09:40:21.231023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:29.519 09:40:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:29.519 [2024-05-16 09:40:22.040297] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:29.519 [2024-05-16 09:40:22.040341] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:27:29.519 [2024-05-16 09:40:22.040347] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.zY1a3MPwhN 00:27:29.519 request: 00:27:29.519 { 00:27:29.519 "name": "TLSTEST", 00:27:29.519 "trtype": "tcp", 00:27:29.519 "traddr": "10.0.0.2", 00:27:29.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.519 "adrfam": "ipv4", 00:27:29.519 "trsvcid": "4420", 00:27:29.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.519 "psk": "/tmp/tmp.zY1a3MPwhN", 00:27:29.519 "method": "bdev_nvme_attach_controller", 00:27:29.519 "req_id": 1 00:27:29.519 } 00:27:29.519 Got JSON-RPC error response 00:27:29.519 response: 00:27:29.519 { 00:27:29.519 "code": -1, 00:27:29.520 "message": "Operation not permitted" 00:27:29.520 } 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 387132 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 387132 ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 387132 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 387132 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 387132' 00:27:29.520 killing process with pid 387132 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 387132 00:27:29.520 Received shutdown signal, test time was about 10.000000 seconds 00:27:29.520 00:27:29.520 Latency(us) 00:27:29.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.520 =================================================================================================================== 00:27:29.520 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 387132 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 384751 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 384751 ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 384751 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 384751 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 384751' 00:27:29.520 killing process with pid 384751 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 384751 00:27:29.520 [2024-05-16 09:40:22.279618] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:29.520 [2024-05-16 09:40:22.279656] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 384751 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=387477 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 387477 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 387477 ']' 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:29.520 09:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:29.520 [2024-05-16 09:40:22.456200] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
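[Editor's note] The chmod 0666 attach failure above, and the nvmf_subsystem_add_host failure that follows below, are the negative half of the test: both the bdevperf initiator (bdev_nvme.c) and the target (tcp.c) refuse to load a key file once it is group/world accessible. As a rough model of the behaviour observed in this log (not the actual C check), the gate amounts to "no group or other permission bits set":

import os
import stat

def psk_permissions_ok(path: str) -> bool:
    # Behaviour observed in this log: a 0600 key file is accepted, a 0666 one is rejected with
    # "Incorrect permissions for PSK file", so the gate is modelled as "no group/other access".
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

print(psk_permissions_ok("/tmp/tmp.zY1a3MPwhN"))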
00:27:29.520 [2024-05-16 09:40:22.456251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.520 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.520 [2024-05-16 09:40:22.535896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.520 [2024-05-16 09:40:22.588304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.520 [2024-05-16 09:40:22.588338] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.520 [2024-05-16 09:40:22.588344] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.520 [2024-05-16 09:40:22.588349] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.520 [2024-05-16 09:40:22.588353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.520 [2024-05-16 09:40:22.588375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.781 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.781 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:29.781 09:40:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.zY1a3MPwhN 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zY1a3MPwhN 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.zY1a3MPwhN 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zY1a3MPwhN 00:27:29.782 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:30.042 [2024-05-16 09:40:23.406478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.042 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:30.042 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:30.308 [2024-05-16 09:40:23.707195] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:30.308 [2024-05-16 09:40:23.707234] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:30.308 [2024-05-16 09:40:23.707386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.308 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:30.308 malloc0 00:27:30.570 09:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:30.570 09:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:30.831 [2024-05-16 09:40:24.138013] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:27:30.831 [2024-05-16 09:40:24.138033] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:27:30.831 [2024-05-16 09:40:24.138056] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:27:30.831 request: 00:27:30.831 { 00:27:30.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.831 "host": "nqn.2016-06.io.spdk:host1", 00:27:30.831 "psk": "/tmp/tmp.zY1a3MPwhN", 00:27:30.831 "method": "nvmf_subsystem_add_host", 00:27:30.831 "req_id": 1 00:27:30.831 } 00:27:30.831 Got JSON-RPC error response 00:27:30.831 response: 00:27:30.831 { 00:27:30.831 "code": -32603, 00:27:30.831 "message": "Internal error" 00:27:30.831 } 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 387477 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 387477 ']' 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 387477 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 387477 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 387477' 00:27:30.831 killing process with pid 387477 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 387477 00:27:30.831 [2024-05-16 09:40:24.207059] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 387477 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- 
target/tls.sh@181 -- # chmod 0600 /tmp/tmp.zY1a3MPwhN 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=387847 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 387847 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 387847 ']' 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:30.831 09:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:30.831 [2024-05-16 09:40:24.383725] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:30.831 [2024-05-16 09:40:24.383776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.093 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.093 [2024-05-16 09:40:24.462351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.093 [2024-05-16 09:40:24.515260] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.093 [2024-05-16 09:40:24.515293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.093 [2024-05-16 09:40:24.515298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.093 [2024-05-16 09:40:24.515302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.093 [2024-05-16 09:40:24.515306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
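[Editor's note] With the key file restored to 0600, the remainder of this section restarts the target and snapshots both the target and bdevperf configuration with save_config; in the dumps that follow, the PSK path survives under nvmf_subsystem_add_host (target side) and bdev_nvme_attach_controller (bdevperf side). A small helper like the sketch below, written against the JSON layout shown in those dumps and purely illustrative, can pull such references back out of a saved config.

import json

def psk_references(config_text: str):
    # Walks a save_config dump (the tgtconf/bdevperfconf blobs further down) and reports
    # every method whose params carry a "psk" entry.
    config = json.loads(config_text)
    hits = []
    for subsystem in config["subsystems"]:
        for entry in subsystem.get("config") or []:
            params = entry.get("params", {})
            if "psk" in params:
                hits.append((entry["method"], params["psk"]))
    return hits

# For the target dump below this should return [("nvmf_subsystem_add_host", "/tmp/tmp.zY1a3MPwhN")];
# for the bdevperf dump, [("bdev_nvme_attach_controller", "/tmp/tmp.zY1a3MPwhN")].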
00:27:31.093 [2024-05-16 09:40:24.515321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.zY1a3MPwhN 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zY1a3MPwhN 00:27:31.664 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:31.926 [2024-05-16 09:40:25.325525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.926 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:32.187 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:32.187 [2024-05-16 09:40:25.622228] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:32.187 [2024-05-16 09:40:25.622267] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:32.187 [2024-05-16 09:40:25.622418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.187 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:32.447 malloc0 00:27:32.447 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:27:32.447 09:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:32.708 [2024-05-16 09:40:26.049088] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=388210 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 388210 /var/tmp/bdevperf.sock 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 388210 ']' 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.708 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:32.708 [2024-05-16 09:40:26.094166] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:32.708 [2024-05-16 09:40:26.094223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388210 ] 00:27:32.708 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.708 [2024-05-16 09:40:26.148038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.708 [2024-05-16 09:40:26.200773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.969 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:32.969 09:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:32.969 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:32.969 [2024-05-16 09:40:26.420520] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:32.969 [2024-05-16 09:40:26.420587] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:32.969 TLSTESTn1 00:27:32.969 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:27:33.230 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:27:33.230 "subsystems": [ 00:27:33.230 { 00:27:33.230 "subsystem": "keyring", 00:27:33.230 "config": [] 00:27:33.230 }, 00:27:33.230 { 00:27:33.230 "subsystem": "iobuf", 00:27:33.230 "config": [ 00:27:33.230 { 00:27:33.230 "method": "iobuf_set_options", 00:27:33.230 "params": { 00:27:33.230 "small_pool_count": 8192, 00:27:33.230 "large_pool_count": 1024, 00:27:33.230 "small_bufsize": 8192, 00:27:33.230 "large_bufsize": 135168 00:27:33.230 } 00:27:33.230 } 00:27:33.230 ] 00:27:33.230 }, 00:27:33.230 { 00:27:33.230 "subsystem": "sock", 00:27:33.230 "config": [ 00:27:33.230 { 00:27:33.230 "method": "sock_impl_set_options", 00:27:33.230 "params": { 00:27:33.230 "impl_name": "posix", 00:27:33.230 "recv_buf_size": 2097152, 00:27:33.230 "send_buf_size": 2097152, 00:27:33.230 "enable_recv_pipe": true, 00:27:33.230 "enable_quickack": false, 00:27:33.230 "enable_placement_id": 0, 00:27:33.230 "enable_zerocopy_send_server": true, 00:27:33.230 "enable_zerocopy_send_client": false, 00:27:33.230 "zerocopy_threshold": 0, 00:27:33.230 "tls_version": 0, 00:27:33.230 "enable_ktls": false 00:27:33.230 } 00:27:33.230 }, 00:27:33.230 { 00:27:33.230 "method": "sock_impl_set_options", 00:27:33.230 "params": { 00:27:33.231 
"impl_name": "ssl", 00:27:33.231 "recv_buf_size": 4096, 00:27:33.231 "send_buf_size": 4096, 00:27:33.231 "enable_recv_pipe": true, 00:27:33.231 "enable_quickack": false, 00:27:33.231 "enable_placement_id": 0, 00:27:33.231 "enable_zerocopy_send_server": true, 00:27:33.231 "enable_zerocopy_send_client": false, 00:27:33.231 "zerocopy_threshold": 0, 00:27:33.231 "tls_version": 0, 00:27:33.231 "enable_ktls": false 00:27:33.231 } 00:27:33.231 } 00:27:33.231 ] 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "subsystem": "vmd", 00:27:33.231 "config": [] 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "subsystem": "accel", 00:27:33.231 "config": [ 00:27:33.231 { 00:27:33.231 "method": "accel_set_options", 00:27:33.231 "params": { 00:27:33.231 "small_cache_size": 128, 00:27:33.231 "large_cache_size": 16, 00:27:33.231 "task_count": 2048, 00:27:33.231 "sequence_count": 2048, 00:27:33.231 "buf_count": 2048 00:27:33.231 } 00:27:33.231 } 00:27:33.231 ] 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "subsystem": "bdev", 00:27:33.231 "config": [ 00:27:33.231 { 00:27:33.231 "method": "bdev_set_options", 00:27:33.231 "params": { 00:27:33.231 "bdev_io_pool_size": 65535, 00:27:33.231 "bdev_io_cache_size": 256, 00:27:33.231 "bdev_auto_examine": true, 00:27:33.231 "iobuf_small_cache_size": 128, 00:27:33.231 "iobuf_large_cache_size": 16 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "bdev_raid_set_options", 00:27:33.231 "params": { 00:27:33.231 "process_window_size_kb": 1024 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "bdev_iscsi_set_options", 00:27:33.231 "params": { 00:27:33.231 "timeout_sec": 30 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "bdev_nvme_set_options", 00:27:33.231 "params": { 00:27:33.231 "action_on_timeout": "none", 00:27:33.231 "timeout_us": 0, 00:27:33.231 "timeout_admin_us": 0, 00:27:33.231 "keep_alive_timeout_ms": 10000, 00:27:33.231 "arbitration_burst": 0, 00:27:33.231 "low_priority_weight": 0, 00:27:33.231 "medium_priority_weight": 0, 00:27:33.231 "high_priority_weight": 0, 00:27:33.231 "nvme_adminq_poll_period_us": 10000, 00:27:33.231 "nvme_ioq_poll_period_us": 0, 00:27:33.231 "io_queue_requests": 0, 00:27:33.231 "delay_cmd_submit": true, 00:27:33.231 "transport_retry_count": 4, 00:27:33.231 "bdev_retry_count": 3, 00:27:33.231 "transport_ack_timeout": 0, 00:27:33.231 "ctrlr_loss_timeout_sec": 0, 00:27:33.231 "reconnect_delay_sec": 0, 00:27:33.231 "fast_io_fail_timeout_sec": 0, 00:27:33.231 "disable_auto_failback": false, 00:27:33.231 "generate_uuids": false, 00:27:33.231 "transport_tos": 0, 00:27:33.231 "nvme_error_stat": false, 00:27:33.231 "rdma_srq_size": 0, 00:27:33.231 "io_path_stat": false, 00:27:33.231 "allow_accel_sequence": false, 00:27:33.231 "rdma_max_cq_size": 0, 00:27:33.231 "rdma_cm_event_timeout_ms": 0, 00:27:33.231 "dhchap_digests": [ 00:27:33.231 "sha256", 00:27:33.231 "sha384", 00:27:33.231 "sha512" 00:27:33.231 ], 00:27:33.231 "dhchap_dhgroups": [ 00:27:33.231 "null", 00:27:33.231 "ffdhe2048", 00:27:33.231 "ffdhe3072", 00:27:33.231 "ffdhe4096", 00:27:33.231 "ffdhe6144", 00:27:33.231 "ffdhe8192" 00:27:33.231 ] 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "bdev_nvme_set_hotplug", 00:27:33.231 "params": { 00:27:33.231 "period_us": 100000, 00:27:33.231 "enable": false 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "bdev_malloc_create", 00:27:33.231 "params": { 00:27:33.231 "name": "malloc0", 00:27:33.231 "num_blocks": 8192, 00:27:33.231 "block_size": 4096, 00:27:33.231 
"physical_block_size": 4096, 00:27:33.231 "uuid": "77986083-1d7a-49d7-943d-3227f5020dc8", 00:27:33.231 "optimal_io_boundary": 0 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "bdev_wait_for_examine" 00:27:33.231 } 00:27:33.231 ] 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "subsystem": "nbd", 00:27:33.231 "config": [] 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "subsystem": "scheduler", 00:27:33.231 "config": [ 00:27:33.231 { 00:27:33.231 "method": "framework_set_scheduler", 00:27:33.231 "params": { 00:27:33.231 "name": "static" 00:27:33.231 } 00:27:33.231 } 00:27:33.231 ] 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "subsystem": "nvmf", 00:27:33.231 "config": [ 00:27:33.231 { 00:27:33.231 "method": "nvmf_set_config", 00:27:33.231 "params": { 00:27:33.231 "discovery_filter": "match_any", 00:27:33.231 "admin_cmd_passthru": { 00:27:33.231 "identify_ctrlr": false 00:27:33.231 } 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_set_max_subsystems", 00:27:33.231 "params": { 00:27:33.231 "max_subsystems": 1024 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_set_crdt", 00:27:33.231 "params": { 00:27:33.231 "crdt1": 0, 00:27:33.231 "crdt2": 0, 00:27:33.231 "crdt3": 0 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_create_transport", 00:27:33.231 "params": { 00:27:33.231 "trtype": "TCP", 00:27:33.231 "max_queue_depth": 128, 00:27:33.231 "max_io_qpairs_per_ctrlr": 127, 00:27:33.231 "in_capsule_data_size": 4096, 00:27:33.231 "max_io_size": 131072, 00:27:33.231 "io_unit_size": 131072, 00:27:33.231 "max_aq_depth": 128, 00:27:33.231 "num_shared_buffers": 511, 00:27:33.231 "buf_cache_size": 4294967295, 00:27:33.231 "dif_insert_or_strip": false, 00:27:33.231 "zcopy": false, 00:27:33.231 "c2h_success": false, 00:27:33.231 "sock_priority": 0, 00:27:33.231 "abort_timeout_sec": 1, 00:27:33.231 "ack_timeout": 0, 00:27:33.231 "data_wr_pool_size": 0 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_create_subsystem", 00:27:33.231 "params": { 00:27:33.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.231 "allow_any_host": false, 00:27:33.231 "serial_number": "SPDK00000000000001", 00:27:33.231 "model_number": "SPDK bdev Controller", 00:27:33.231 "max_namespaces": 10, 00:27:33.231 "min_cntlid": 1, 00:27:33.231 "max_cntlid": 65519, 00:27:33.231 "ana_reporting": false 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_subsystem_add_host", 00:27:33.231 "params": { 00:27:33.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.231 "host": "nqn.2016-06.io.spdk:host1", 00:27:33.231 "psk": "/tmp/tmp.zY1a3MPwhN" 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_subsystem_add_ns", 00:27:33.231 "params": { 00:27:33.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.231 "namespace": { 00:27:33.231 "nsid": 1, 00:27:33.231 "bdev_name": "malloc0", 00:27:33.231 "nguid": "779860831D7A49D7943D3227F5020DC8", 00:27:33.231 "uuid": "77986083-1d7a-49d7-943d-3227f5020dc8", 00:27:33.231 "no_auto_visible": false 00:27:33.231 } 00:27:33.231 } 00:27:33.231 }, 00:27:33.231 { 00:27:33.231 "method": "nvmf_subsystem_add_listener", 00:27:33.231 "params": { 00:27:33.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.231 "listen_address": { 00:27:33.231 "trtype": "TCP", 00:27:33.231 "adrfam": "IPv4", 00:27:33.231 "traddr": "10.0.0.2", 00:27:33.231 "trsvcid": "4420" 00:27:33.231 }, 00:27:33.231 "secure_channel": true 00:27:33.231 } 00:27:33.231 } 00:27:33.231 ] 00:27:33.231 } 
00:27:33.231 ] 00:27:33.231 }' 00:27:33.231 09:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:27:33.492 09:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:27:33.492 "subsystems": [ 00:27:33.492 { 00:27:33.492 "subsystem": "keyring", 00:27:33.492 "config": [] 00:27:33.492 }, 00:27:33.492 { 00:27:33.492 "subsystem": "iobuf", 00:27:33.492 "config": [ 00:27:33.492 { 00:27:33.492 "method": "iobuf_set_options", 00:27:33.492 "params": { 00:27:33.492 "small_pool_count": 8192, 00:27:33.492 "large_pool_count": 1024, 00:27:33.492 "small_bufsize": 8192, 00:27:33.492 "large_bufsize": 135168 00:27:33.492 } 00:27:33.492 } 00:27:33.492 ] 00:27:33.492 }, 00:27:33.492 { 00:27:33.492 "subsystem": "sock", 00:27:33.492 "config": [ 00:27:33.492 { 00:27:33.492 "method": "sock_impl_set_options", 00:27:33.492 "params": { 00:27:33.492 "impl_name": "posix", 00:27:33.492 "recv_buf_size": 2097152, 00:27:33.492 "send_buf_size": 2097152, 00:27:33.492 "enable_recv_pipe": true, 00:27:33.492 "enable_quickack": false, 00:27:33.492 "enable_placement_id": 0, 00:27:33.492 "enable_zerocopy_send_server": true, 00:27:33.492 "enable_zerocopy_send_client": false, 00:27:33.492 "zerocopy_threshold": 0, 00:27:33.492 "tls_version": 0, 00:27:33.492 "enable_ktls": false 00:27:33.492 } 00:27:33.492 }, 00:27:33.492 { 00:27:33.492 "method": "sock_impl_set_options", 00:27:33.492 "params": { 00:27:33.492 "impl_name": "ssl", 00:27:33.492 "recv_buf_size": 4096, 00:27:33.492 "send_buf_size": 4096, 00:27:33.492 "enable_recv_pipe": true, 00:27:33.492 "enable_quickack": false, 00:27:33.492 "enable_placement_id": 0, 00:27:33.492 "enable_zerocopy_send_server": true, 00:27:33.492 "enable_zerocopy_send_client": false, 00:27:33.492 "zerocopy_threshold": 0, 00:27:33.492 "tls_version": 0, 00:27:33.492 "enable_ktls": false 00:27:33.492 } 00:27:33.492 } 00:27:33.492 ] 00:27:33.492 }, 00:27:33.492 { 00:27:33.492 "subsystem": "vmd", 00:27:33.492 "config": [] 00:27:33.492 }, 00:27:33.492 { 00:27:33.492 "subsystem": "accel", 00:27:33.492 "config": [ 00:27:33.492 { 00:27:33.492 "method": "accel_set_options", 00:27:33.492 "params": { 00:27:33.492 "small_cache_size": 128, 00:27:33.492 "large_cache_size": 16, 00:27:33.492 "task_count": 2048, 00:27:33.492 "sequence_count": 2048, 00:27:33.492 "buf_count": 2048 00:27:33.492 } 00:27:33.492 } 00:27:33.492 ] 00:27:33.492 }, 00:27:33.492 { 00:27:33.492 "subsystem": "bdev", 00:27:33.492 "config": [ 00:27:33.492 { 00:27:33.492 "method": "bdev_set_options", 00:27:33.492 "params": { 00:27:33.492 "bdev_io_pool_size": 65535, 00:27:33.492 "bdev_io_cache_size": 256, 00:27:33.492 "bdev_auto_examine": true, 00:27:33.492 "iobuf_small_cache_size": 128, 00:27:33.492 "iobuf_large_cache_size": 16 00:27:33.493 } 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "method": "bdev_raid_set_options", 00:27:33.493 "params": { 00:27:33.493 "process_window_size_kb": 1024 00:27:33.493 } 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "method": "bdev_iscsi_set_options", 00:27:33.493 "params": { 00:27:33.493 "timeout_sec": 30 00:27:33.493 } 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "method": "bdev_nvme_set_options", 00:27:33.493 "params": { 00:27:33.493 "action_on_timeout": "none", 00:27:33.493 "timeout_us": 0, 00:27:33.493 "timeout_admin_us": 0, 00:27:33.493 "keep_alive_timeout_ms": 10000, 00:27:33.493 "arbitration_burst": 0, 00:27:33.493 "low_priority_weight": 0, 00:27:33.493 "medium_priority_weight": 0, 00:27:33.493 
"high_priority_weight": 0, 00:27:33.493 "nvme_adminq_poll_period_us": 10000, 00:27:33.493 "nvme_ioq_poll_period_us": 0, 00:27:33.493 "io_queue_requests": 512, 00:27:33.493 "delay_cmd_submit": true, 00:27:33.493 "transport_retry_count": 4, 00:27:33.493 "bdev_retry_count": 3, 00:27:33.493 "transport_ack_timeout": 0, 00:27:33.493 "ctrlr_loss_timeout_sec": 0, 00:27:33.493 "reconnect_delay_sec": 0, 00:27:33.493 "fast_io_fail_timeout_sec": 0, 00:27:33.493 "disable_auto_failback": false, 00:27:33.493 "generate_uuids": false, 00:27:33.493 "transport_tos": 0, 00:27:33.493 "nvme_error_stat": false, 00:27:33.493 "rdma_srq_size": 0, 00:27:33.493 "io_path_stat": false, 00:27:33.493 "allow_accel_sequence": false, 00:27:33.493 "rdma_max_cq_size": 0, 00:27:33.493 "rdma_cm_event_timeout_ms": 0, 00:27:33.493 "dhchap_digests": [ 00:27:33.493 "sha256", 00:27:33.493 "sha384", 00:27:33.493 "sha512" 00:27:33.493 ], 00:27:33.493 "dhchap_dhgroups": [ 00:27:33.493 "null", 00:27:33.493 "ffdhe2048", 00:27:33.493 "ffdhe3072", 00:27:33.493 "ffdhe4096", 00:27:33.493 "ffdhe6144", 00:27:33.493 "ffdhe8192" 00:27:33.493 ] 00:27:33.493 } 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "method": "bdev_nvme_attach_controller", 00:27:33.493 "params": { 00:27:33.493 "name": "TLSTEST", 00:27:33.493 "trtype": "TCP", 00:27:33.493 "adrfam": "IPv4", 00:27:33.493 "traddr": "10.0.0.2", 00:27:33.493 "trsvcid": "4420", 00:27:33.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.493 "prchk_reftag": false, 00:27:33.493 "prchk_guard": false, 00:27:33.493 "ctrlr_loss_timeout_sec": 0, 00:27:33.493 "reconnect_delay_sec": 0, 00:27:33.493 "fast_io_fail_timeout_sec": 0, 00:27:33.493 "psk": "/tmp/tmp.zY1a3MPwhN", 00:27:33.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.493 "hdgst": false, 00:27:33.493 "ddgst": false 00:27:33.493 } 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "method": "bdev_nvme_set_hotplug", 00:27:33.493 "params": { 00:27:33.493 "period_us": 100000, 00:27:33.493 "enable": false 00:27:33.493 } 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "method": "bdev_wait_for_examine" 00:27:33.493 } 00:27:33.493 ] 00:27:33.493 }, 00:27:33.493 { 00:27:33.493 "subsystem": "nbd", 00:27:33.493 "config": [] 00:27:33.493 } 00:27:33.493 ] 00:27:33.493 }' 00:27:33.493 09:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 388210 00:27:33.493 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 388210 ']' 00:27:33.493 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 388210 00:27:33.493 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:33.493 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:33.493 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 388210 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 388210' 00:27:33.754 killing process with pid 388210 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 388210 00:27:33.754 Received shutdown signal, test time was about 10.000000 seconds 00:27:33.754 00:27:33.754 Latency(us) 00:27:33.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.754 
=================================================================================================================== 00:27:33.754 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:33.754 [2024-05-16 09:40:27.060467] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 388210 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 387847 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 387847 ']' 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 387847 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 387847 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 387847' 00:27:33.754 killing process with pid 387847 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 387847 00:27:33.754 [2024-05-16 09:40:27.228110] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:33.754 [2024-05-16 09:40:27.228140] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:33.754 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 387847 00:27:34.015 09:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:27:34.015 09:40:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:34.015 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:34.015 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:34.015 09:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:27:34.015 "subsystems": [ 00:27:34.015 { 00:27:34.015 "subsystem": "keyring", 00:27:34.015 "config": [] 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "subsystem": "iobuf", 00:27:34.015 "config": [ 00:27:34.015 { 00:27:34.015 "method": "iobuf_set_options", 00:27:34.015 "params": { 00:27:34.015 "small_pool_count": 8192, 00:27:34.015 "large_pool_count": 1024, 00:27:34.015 "small_bufsize": 8192, 00:27:34.015 "large_bufsize": 135168 00:27:34.015 } 00:27:34.015 } 00:27:34.015 ] 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "subsystem": "sock", 00:27:34.015 "config": [ 00:27:34.015 { 00:27:34.015 "method": "sock_impl_set_options", 00:27:34.015 "params": { 00:27:34.015 "impl_name": "posix", 00:27:34.015 "recv_buf_size": 2097152, 00:27:34.015 "send_buf_size": 2097152, 00:27:34.015 "enable_recv_pipe": true, 00:27:34.015 "enable_quickack": false, 00:27:34.015 "enable_placement_id": 0, 00:27:34.015 "enable_zerocopy_send_server": true, 00:27:34.015 "enable_zerocopy_send_client": false, 00:27:34.015 "zerocopy_threshold": 0, 00:27:34.015 "tls_version": 0, 00:27:34.015 "enable_ktls": false 00:27:34.015 } 00:27:34.015 }, 
00:27:34.015 { 00:27:34.015 "method": "sock_impl_set_options", 00:27:34.015 "params": { 00:27:34.015 "impl_name": "ssl", 00:27:34.015 "recv_buf_size": 4096, 00:27:34.015 "send_buf_size": 4096, 00:27:34.015 "enable_recv_pipe": true, 00:27:34.015 "enable_quickack": false, 00:27:34.015 "enable_placement_id": 0, 00:27:34.015 "enable_zerocopy_send_server": true, 00:27:34.015 "enable_zerocopy_send_client": false, 00:27:34.015 "zerocopy_threshold": 0, 00:27:34.015 "tls_version": 0, 00:27:34.015 "enable_ktls": false 00:27:34.015 } 00:27:34.015 } 00:27:34.015 ] 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "subsystem": "vmd", 00:27:34.015 "config": [] 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "subsystem": "accel", 00:27:34.015 "config": [ 00:27:34.015 { 00:27:34.015 "method": "accel_set_options", 00:27:34.015 "params": { 00:27:34.015 "small_cache_size": 128, 00:27:34.015 "large_cache_size": 16, 00:27:34.015 "task_count": 2048, 00:27:34.015 "sequence_count": 2048, 00:27:34.015 "buf_count": 2048 00:27:34.015 } 00:27:34.015 } 00:27:34.015 ] 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "subsystem": "bdev", 00:27:34.015 "config": [ 00:27:34.015 { 00:27:34.015 "method": "bdev_set_options", 00:27:34.015 "params": { 00:27:34.015 "bdev_io_pool_size": 65535, 00:27:34.015 "bdev_io_cache_size": 256, 00:27:34.015 "bdev_auto_examine": true, 00:27:34.015 "iobuf_small_cache_size": 128, 00:27:34.015 "iobuf_large_cache_size": 16 00:27:34.015 } 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "method": "bdev_raid_set_options", 00:27:34.015 "params": { 00:27:34.015 "process_window_size_kb": 1024 00:27:34.015 } 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "method": "bdev_iscsi_set_options", 00:27:34.015 "params": { 00:27:34.015 "timeout_sec": 30 00:27:34.015 } 00:27:34.015 }, 00:27:34.015 { 00:27:34.015 "method": "bdev_nvme_set_options", 00:27:34.015 "params": { 00:27:34.015 "action_on_timeout": "none", 00:27:34.015 "timeout_us": 0, 00:27:34.015 "timeout_admin_us": 0, 00:27:34.015 "keep_alive_timeout_ms": 10000, 00:27:34.015 "arbitration_burst": 0, 00:27:34.015 "low_priority_weight": 0, 00:27:34.015 "medium_priority_weight": 0, 00:27:34.015 "high_priority_weight": 0, 00:27:34.015 "nvme_adminq_poll_period_us": 10000, 00:27:34.015 "nvme_ioq_poll_period_us": 0, 00:27:34.015 "io_queue_requests": 0, 00:27:34.015 "delay_cmd_submit": true, 00:27:34.015 "transport_retry_count": 4, 00:27:34.015 "bdev_retry_count": 3, 00:27:34.015 "transport_ack_timeout": 0, 00:27:34.015 "ctrlr_loss_timeout_sec": 0, 00:27:34.015 "reconnect_delay_sec": 0, 00:27:34.015 "fast_io_fail_timeout_sec": 0, 00:27:34.015 "disable_auto_failback": false, 00:27:34.015 "generate_uuids": false, 00:27:34.015 "transport_tos": 0, 00:27:34.016 "nvme_error_stat": false, 00:27:34.016 "rdma_srq_size": 0, 00:27:34.016 "io_path_stat": false, 00:27:34.016 "allow_accel_sequence": false, 00:27:34.016 "rdma_max_cq_size": 0, 00:27:34.016 "rdma_cm_event_timeout_ms": 0, 00:27:34.016 "dhchap_digests": [ 00:27:34.016 "sha256", 00:27:34.016 "sha384", 00:27:34.016 "sha512" 00:27:34.016 ], 00:27:34.016 "dhchap_dhgroups": [ 00:27:34.016 "null", 00:27:34.016 "ffdhe2048", 00:27:34.016 "ffdhe3072", 00:27:34.016 "ffdhe4096", 00:27:34.016 "ffdhe6144", 00:27:34.016 "ffdhe8192" 00:27:34.016 ] 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "bdev_nvme_set_hotplug", 00:27:34.016 "params": { 00:27:34.016 "period_us": 100000, 00:27:34.016 "enable": false 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "bdev_malloc_create", 00:27:34.016 "params": { 
00:27:34.016 "name": "malloc0", 00:27:34.016 "num_blocks": 8192, 00:27:34.016 "block_size": 4096, 00:27:34.016 "physical_block_size": 4096, 00:27:34.016 "uuid": "77986083-1d7a-49d7-943d-3227f5020dc8", 00:27:34.016 "optimal_io_boundary": 0 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "bdev_wait_for_examine" 00:27:34.016 } 00:27:34.016 ] 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "subsystem": "nbd", 00:27:34.016 "config": [] 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "subsystem": "scheduler", 00:27:34.016 "config": [ 00:27:34.016 { 00:27:34.016 "method": "framework_set_scheduler", 00:27:34.016 "params": { 00:27:34.016 "name": "static" 00:27:34.016 } 00:27:34.016 } 00:27:34.016 ] 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "subsystem": "nvmf", 00:27:34.016 "config": [ 00:27:34.016 { 00:27:34.016 "method": "nvmf_set_config", 00:27:34.016 "params": { 00:27:34.016 "discovery_filter": "match_any", 00:27:34.016 "admin_cmd_passthru": { 00:27:34.016 "identify_ctrlr": false 00:27:34.016 } 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_set_max_subsystems", 00:27:34.016 "params": { 00:27:34.016 "max_subsystems": 1024 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_set_crdt", 00:27:34.016 "params": { 00:27:34.016 "crdt1": 0, 00:27:34.016 "crdt2": 0, 00:27:34.016 "crdt3": 0 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_create_transport", 00:27:34.016 "params": { 00:27:34.016 "trtype": "TCP", 00:27:34.016 "max_queue_depth": 128, 00:27:34.016 "max_io_qpairs_per_ctrlr": 127, 00:27:34.016 "in_capsule_data_size": 4096, 00:27:34.016 "max_io_size": 131072, 00:27:34.016 "io_unit_size": 131072, 00:27:34.016 "max_aq_depth": 128, 00:27:34.016 "num_shared_buffers": 511, 00:27:34.016 "buf_cache_size": 4294967295, 00:27:34.016 "dif_insert_or_strip": false, 00:27:34.016 "zcopy": false, 00:27:34.016 "c2h_success": false, 00:27:34.016 "sock_priority": 0, 00:27:34.016 "abort_timeout_sec": 1, 00:27:34.016 "ack_timeout": 0, 00:27:34.016 "data_wr_pool_size": 0 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_create_subsystem", 00:27:34.016 "params": { 00:27:34.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.016 "allow_any_host": false, 00:27:34.016 "serial_number": "SPDK00000000000001", 00:27:34.016 "model_number": "SPDK bdev Controller", 00:27:34.016 "max_namespaces": 10, 00:27:34.016 "min_cntlid": 1, 00:27:34.016 "max_cntlid": 65519, 00:27:34.016 "ana_reporting": false 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_subsystem_add_host", 00:27:34.016 "params": { 00:27:34.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.016 "host": "nqn.2016-06.io.spdk:host1", 00:27:34.016 "psk": "/tmp/tmp.zY1a3MPwhN" 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_subsystem_add_ns", 00:27:34.016 "params": { 00:27:34.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.016 "namespace": { 00:27:34.016 "nsid": 1, 00:27:34.016 "bdev_name": "malloc0", 00:27:34.016 "nguid": "779860831D7A49D7943D3227F5020DC8", 00:27:34.016 "uuid": "77986083-1d7a-49d7-943d-3227f5020dc8", 00:27:34.016 "no_auto_visible": false 00:27:34.016 } 00:27:34.016 } 00:27:34.016 }, 00:27:34.016 { 00:27:34.016 "method": "nvmf_subsystem_add_listener", 00:27:34.016 "params": { 00:27:34.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.016 "listen_address": { 00:27:34.016 "trtype": "TCP", 00:27:34.016 "adrfam": "IPv4", 00:27:34.016 "traddr": "10.0.0.2", 00:27:34.016 "trsvcid": "4420" 
00:27:34.016 }, 00:27:34.016 "secure_channel": true 00:27:34.016 } 00:27:34.016 } 00:27:34.016 ] 00:27:34.016 } 00:27:34.016 ] 00:27:34.016 }' 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=388557 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 388557 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 388557 ']' 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.016 09:40:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:34.016 [2024-05-16 09:40:27.406551] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:34.016 [2024-05-16 09:40:27.406606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.016 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.016 [2024-05-16 09:40:27.488628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.016 [2024-05-16 09:40:27.542027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.016 [2024-05-16 09:40:27.542064] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.016 [2024-05-16 09:40:27.542069] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.016 [2024-05-16 09:40:27.542074] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.016 [2024-05-16 09:40:27.542078] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
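The nvmf_tgt instance above receives its entire configuration as the JSON document echoed by target/tls.sh; the "-c /dev/fd/62" in its command line is simply the file descriptor that bash process substitution hands to the target. A minimal sketch of that launch pattern follows (the binary path is shortened and CONFIG_JSON is a placeholder for the full dump above, not the exact test script):

  # Start nvmf_tgt with an inline JSON config; process substitution is what
  # produces the /dev/fd/62 path seen in the trace. The CI run additionally
  # wraps the command in "ip netns exec cvl_0_0_ns_spdk".
  CONFIG_JSON='{ "subsystems": [] }'     # stand-in for the dump echoed above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$CONFIG_JSON") &
  NVMF_PID=$!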
00:27:34.017 [2024-05-16 09:40:27.542120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.277 [2024-05-16 09:40:27.717355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.277 [2024-05-16 09:40:27.733338] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:34.277 [2024-05-16 09:40:27.749358] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:34.277 [2024-05-16 09:40:27.749390] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:34.277 [2024-05-16 09:40:27.758342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=388680 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 388680 /var/tmp/bdevperf.sock 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 388680 ']' 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:34.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:34.848 09:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:27:34.848 "subsystems": [ 00:27:34.848 { 00:27:34.848 "subsystem": "keyring", 00:27:34.848 "config": [] 00:27:34.848 }, 00:27:34.848 { 00:27:34.848 "subsystem": "iobuf", 00:27:34.849 "config": [ 00:27:34.849 { 00:27:34.849 "method": "iobuf_set_options", 00:27:34.849 "params": { 00:27:34.849 "small_pool_count": 8192, 00:27:34.849 "large_pool_count": 1024, 00:27:34.849 "small_bufsize": 8192, 00:27:34.849 "large_bufsize": 135168 00:27:34.849 } 00:27:34.849 } 00:27:34.849 ] 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "subsystem": "sock", 00:27:34.849 "config": [ 00:27:34.849 { 00:27:34.849 "method": "sock_impl_set_options", 00:27:34.849 "params": { 00:27:34.849 "impl_name": "posix", 00:27:34.849 "recv_buf_size": 2097152, 00:27:34.849 "send_buf_size": 2097152, 00:27:34.849 "enable_recv_pipe": true, 00:27:34.849 "enable_quickack": false, 00:27:34.849 "enable_placement_id": 0, 00:27:34.849 "enable_zerocopy_send_server": true, 00:27:34.849 "enable_zerocopy_send_client": false, 00:27:34.849 "zerocopy_threshold": 0, 00:27:34.849 "tls_version": 0, 00:27:34.849 "enable_ktls": false 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "sock_impl_set_options", 00:27:34.849 "params": { 00:27:34.849 "impl_name": "ssl", 00:27:34.849 "recv_buf_size": 4096, 00:27:34.849 "send_buf_size": 4096, 00:27:34.849 "enable_recv_pipe": true, 00:27:34.849 "enable_quickack": false, 00:27:34.849 "enable_placement_id": 0, 00:27:34.849 "enable_zerocopy_send_server": true, 00:27:34.849 "enable_zerocopy_send_client": false, 00:27:34.849 "zerocopy_threshold": 0, 00:27:34.849 "tls_version": 0, 00:27:34.849 "enable_ktls": false 00:27:34.849 } 00:27:34.849 } 00:27:34.849 ] 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "subsystem": "vmd", 00:27:34.849 "config": [] 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "subsystem": "accel", 00:27:34.849 "config": [ 00:27:34.849 { 00:27:34.849 "method": "accel_set_options", 00:27:34.849 "params": { 00:27:34.849 "small_cache_size": 128, 00:27:34.849 "large_cache_size": 16, 00:27:34.849 "task_count": 2048, 00:27:34.849 "sequence_count": 2048, 00:27:34.849 "buf_count": 2048 00:27:34.849 } 00:27:34.849 } 00:27:34.849 ] 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "subsystem": "bdev", 00:27:34.849 "config": [ 00:27:34.849 { 00:27:34.849 "method": "bdev_set_options", 00:27:34.849 "params": { 00:27:34.849 "bdev_io_pool_size": 65535, 00:27:34.849 "bdev_io_cache_size": 256, 00:27:34.849 "bdev_auto_examine": true, 00:27:34.849 "iobuf_small_cache_size": 128, 00:27:34.849 "iobuf_large_cache_size": 16 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "bdev_raid_set_options", 00:27:34.849 "params": { 00:27:34.849 "process_window_size_kb": 1024 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "bdev_iscsi_set_options", 00:27:34.849 "params": { 00:27:34.849 "timeout_sec": 30 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "bdev_nvme_set_options", 00:27:34.849 "params": { 00:27:34.849 "action_on_timeout": "none", 00:27:34.849 "timeout_us": 0, 00:27:34.849 
"timeout_admin_us": 0, 00:27:34.849 "keep_alive_timeout_ms": 10000, 00:27:34.849 "arbitration_burst": 0, 00:27:34.849 "low_priority_weight": 0, 00:27:34.849 "medium_priority_weight": 0, 00:27:34.849 "high_priority_weight": 0, 00:27:34.849 "nvme_adminq_poll_period_us": 10000, 00:27:34.849 "nvme_ioq_poll_period_us": 0, 00:27:34.849 "io_queue_requests": 512, 00:27:34.849 "delay_cmd_submit": true, 00:27:34.849 "transport_retry_count": 4, 00:27:34.849 "bdev_retry_count": 3, 00:27:34.849 "transport_ack_timeout": 0, 00:27:34.849 "ctrlr_loss_timeout_sec": 0, 00:27:34.849 "reconnect_delay_sec": 0, 00:27:34.849 "fast_io_fail_timeout_sec": 0, 00:27:34.849 "disable_auto_failback": false, 00:27:34.849 "generate_uuids": false, 00:27:34.849 "transport_tos": 0, 00:27:34.849 "nvme_error_stat": false, 00:27:34.849 "rdma_srq_size": 0, 00:27:34.849 "io_path_stat": false, 00:27:34.849 "allow_accel_sequence": false, 00:27:34.849 "rdma_max_cq_size": 0, 00:27:34.849 "rdma_cm_event_timeout_ms": 0, 00:27:34.849 "dhchap_digests": [ 00:27:34.849 "sha256", 00:27:34.849 "sha384", 00:27:34.849 "sha512" 00:27:34.849 ], 00:27:34.849 "dhchap_dhgroups": [ 00:27:34.849 "null", 00:27:34.849 "ffdhe2048", 00:27:34.849 "ffdhe3072", 00:27:34.849 "ffdhe4096", 00:27:34.849 "ffdhe6144", 00:27:34.849 "ffdhe8192" 00:27:34.849 ] 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "bdev_nvme_attach_controller", 00:27:34.849 "params": { 00:27:34.849 "name": "TLSTEST", 00:27:34.849 "trtype": "TCP", 00:27:34.849 "adrfam": "IPv4", 00:27:34.849 "traddr": "10.0.0.2", 00:27:34.849 "trsvcid": "4420", 00:27:34.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.849 "prchk_reftag": false, 00:27:34.849 "prchk_guard": false, 00:27:34.849 "ctrlr_loss_timeout_sec": 0, 00:27:34.849 "reconnect_delay_sec": 0, 00:27:34.849 "fast_io_fail_timeout_sec": 0, 00:27:34.849 "psk": "/tmp/tmp.zY1a3MPwhN", 00:27:34.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.849 "hdgst": false, 00:27:34.849 "ddgst": false 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "bdev_nvme_set_hotplug", 00:27:34.849 "params": { 00:27:34.849 "period_us": 100000, 00:27:34.849 "enable": false 00:27:34.849 } 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "method": "bdev_wait_for_examine" 00:27:34.849 } 00:27:34.849 ] 00:27:34.849 }, 00:27:34.849 { 00:27:34.849 "subsystem": "nbd", 00:27:34.849 "config": [] 00:27:34.849 } 00:27:34.849 ] 00:27:34.849 }' 00:27:34.849 [2024-05-16 09:40:28.262796] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:34.849 [2024-05-16 09:40:28.262867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388680 ] 00:27:34.849 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.849 [2024-05-16 09:40:28.314023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.849 [2024-05-16 09:40:28.366853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.110 [2024-05-16 09:40:28.483547] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:35.110 [2024-05-16 09:40:28.483610] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:35.680 09:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:35.680 09:40:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:35.680 09:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:27:35.680 Running I/O for 10 seconds... 00:27:45.696 00:27:45.696 Latency(us) 00:27:45.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.696 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:45.696 Verification LBA range: start 0x0 length 0x2000 00:27:45.696 TLSTESTn1 : 10.02 5485.35 21.43 0.00 0.00 23298.68 5215.57 48059.73 00:27:45.696 =================================================================================================================== 00:27:45.696 Total : 5485.35 21.43 0.00 0.00 23298.68 5215.57 48059.73 00:27:45.696 0 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 388680 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 388680 ']' 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 388680 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 388680 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 388680' 00:27:45.696 killing process with pid 388680 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 388680 00:27:45.696 Received shutdown signal, test time was about 10.000000 seconds 00:27:45.696 00:27:45.696 Latency(us) 00:27:45.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.696 =================================================================================================================== 00:27:45.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.696 [2024-05-16 09:40:39.236828] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:27:45.696 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 388680 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 388557 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 388557 ']' 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 388557 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 388557 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 388557' 00:27:45.958 killing process with pid 388557 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 388557 00:27:45.958 [2024-05-16 09:40:39.401295] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:45.958 [2024-05-16 09:40:39.401332] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:45.958 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 388557 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=390925 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 390925 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 390925 ']' 00:27:46.218 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.219 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:46.219 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.219 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:46.219 09:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:46.219 [2024-05-16 09:40:39.579669] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:46.219 [2024-05-16 09:40:39.579718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.219 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.219 [2024-05-16 09:40:39.644477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.219 [2024-05-16 09:40:39.707375] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.219 [2024-05-16 09:40:39.707415] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.219 [2024-05-16 09:40:39.707423] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.219 [2024-05-16 09:40:39.707430] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.219 [2024-05-16 09:40:39.707435] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.219 [2024-05-16 09:40:39.707462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.791 09:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:46.791 09:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:46.791 09:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.791 09:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.791 09:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:47.052 09:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.052 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.zY1a3MPwhN 00:27:47.052 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.zY1a3MPwhN 00:27:47.052 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:47.052 [2024-05-16 09:40:40.518224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.052 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:27:47.314 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:27:47.314 [2024-05-16 09:40:40.830983] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:47.314 [2024-05-16 09:40:40.831034] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:47.314 [2024-05-16 09:40:40.831217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.314 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:27:47.574 malloc0 00:27:47.574 09:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
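setup_nvmf_tgt in target/tls.sh configures the freshly started target entirely through rpc.py, and the trace above walks through it step by step: create the TCP transport, create the subsystem, add a TLS-capable listener (-k), create a malloc bdev and expose it as namespace 1; the nvmf_subsystem_add_host call that binds the host PSK follows immediately below. Condensed into a sketch (the workspace prefix on rpc.py is shortened, everything else is taken from the trace):

  RPC=./scripts/rpc.py     # the log uses the full /var/jenkins/workspace/... path
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.zY1a3MPwhN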
00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zY1a3MPwhN 00:27:47.835 [2024-05-16 09:40:41.291056] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=391288 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 391288 /var/tmp/bdevperf.sock 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 391288 ']' 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:47.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.835 09:40:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:47.835 [2024-05-16 09:40:41.358859] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:47.835 [2024-05-16 09:40:41.358909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391288 ] 00:27:47.835 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.096 [2024-05-16 09:40:41.433916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.096 [2024-05-16 09:40:41.488448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.668 09:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:48.668 09:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:48.668 09:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zY1a3MPwhN 00:27:48.929 09:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:27:48.929 [2024-05-16 09:40:42.362666] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:48.929 nvme0n1 00:27:48.929 09:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:49.189 Running I/O for 1 seconds... 
00:27:50.134 00:27:50.134 Latency(us) 00:27:50.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.134 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:50.135 Verification LBA range: start 0x0 length 0x2000 00:27:50.135 nvme0n1 : 1.09 1625.26 6.35 0.00 0.00 76118.46 6526.29 92187.31 00:27:50.135 =================================================================================================================== 00:27:50.135 Total : 1625.26 6.35 0.00 0.00 76118.46 6526.29 92187.31 00:27:50.135 0 00:27:50.135 09:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 391288 00:27:50.135 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 391288 ']' 00:27:50.135 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 391288 00:27:50.135 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:50.135 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.135 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 391288 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 391288' 00:27:50.395 killing process with pid 391288 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 391288 00:27:50.395 Received shutdown signal, test time was about 1.000000 seconds 00:27:50.395 00:27:50.395 Latency(us) 00:27:50.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.395 =================================================================================================================== 00:27:50.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 391288 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 390925 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 390925 ']' 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 390925 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 390925 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 390925' 00:27:50.395 killing process with pid 390925 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 390925 00:27:50.395 [2024-05-16 09:40:43.888192] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:50.395 [2024-05-16 09:40:43.888230] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:50.395 09:40:43 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 390925 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=391862 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 391862 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 391862 ']' 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:50.656 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:50.656 [2024-05-16 09:40:44.096964] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:50.656 [2024-05-16 09:40:44.097020] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.656 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.656 [2024-05-16 09:40:44.161526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.917 [2024-05-16 09:40:44.227044] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.917 [2024-05-16 09:40:44.227087] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.917 [2024-05-16 09:40:44.227095] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.917 [2024-05-16 09:40:44.227101] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.917 [2024-05-16 09:40:44.227107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:50.917 [2024-05-16 09:40:44.227125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 [2024-05-16 09:40:44.893558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.485 malloc0 00:27:51.485 [2024-05-16 09:40:44.920274] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:51.485 [2024-05-16 09:40:44.920319] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:51.485 [2024-05-16 09:40:44.920490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=391994 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 391994 /var/tmp/bdevperf.sock 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 391994 ']' 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:51.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:51.485 09:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:51.485 [2024-05-16 09:40:45.004838] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
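On the initiator side, these later bdevperf runs use the keyring flow rather than the deprecated spdk_nvme_ctrlr_opts.psk path flagged in the earlier warnings: the PSK file is registered as key0 over the bdevperf RPC socket, the controller is attached with --psk key0, and bdevperf.py then runs the verify workload. The same sequence, condensed from the trace with the workspace prefix dropped:

  # Register the PSK file under a key name, attach the TLS-protected controller
  # by that name, then trigger the verify workload over the same RPC socket.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zY1a3MPwhN
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests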
00:27:51.485 [2024-05-16 09:40:45.004890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391994 ] 00:27:51.485 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.745 [2024-05-16 09:40:45.081184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.745 [2024-05-16 09:40:45.135146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.314 09:40:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:52.314 09:40:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:52.314 09:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zY1a3MPwhN 00:27:52.574 09:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:27:52.574 [2024-05-16 09:40:46.057406] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:52.574 nvme0n1 00:27:52.833 09:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:52.833 Running I/O for 1 seconds... 00:27:53.773 00:27:53.773 Latency(us) 00:27:53.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.773 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:53.773 Verification LBA range: start 0x0 length 0x2000 00:27:53.773 nvme0n1 : 1.07 3258.56 12.73 0.00 0.00 38284.44 5734.40 88255.15 00:27:53.773 =================================================================================================================== 00:27:53.773 Total : 3258.56 12.73 0.00 0.00 38284.44 5734.40 88255.15 00:27:53.773 0 00:27:53.773 09:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:27:53.773 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.773 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:54.034 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.034 09:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:27:54.034 "subsystems": [ 00:27:54.034 { 00:27:54.034 "subsystem": "keyring", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "keyring_file_add_key", 00:27:54.034 "params": { 00:27:54.034 "name": "key0", 00:27:54.034 "path": "/tmp/tmp.zY1a3MPwhN" 00:27:54.034 } 00:27:54.034 } 00:27:54.034 ] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "iobuf", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "iobuf_set_options", 00:27:54.034 "params": { 00:27:54.034 "small_pool_count": 8192, 00:27:54.034 "large_pool_count": 1024, 00:27:54.034 "small_bufsize": 8192, 00:27:54.034 "large_bufsize": 135168 00:27:54.034 } 00:27:54.034 } 00:27:54.034 ] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "sock", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "sock_impl_set_options", 00:27:54.034 "params": { 00:27:54.034 "impl_name": "posix", 00:27:54.034 "recv_buf_size": 2097152, 
00:27:54.034 "send_buf_size": 2097152, 00:27:54.034 "enable_recv_pipe": true, 00:27:54.034 "enable_quickack": false, 00:27:54.034 "enable_placement_id": 0, 00:27:54.034 "enable_zerocopy_send_server": true, 00:27:54.034 "enable_zerocopy_send_client": false, 00:27:54.034 "zerocopy_threshold": 0, 00:27:54.034 "tls_version": 0, 00:27:54.034 "enable_ktls": false 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "sock_impl_set_options", 00:27:54.034 "params": { 00:27:54.034 "impl_name": "ssl", 00:27:54.034 "recv_buf_size": 4096, 00:27:54.034 "send_buf_size": 4096, 00:27:54.034 "enable_recv_pipe": true, 00:27:54.034 "enable_quickack": false, 00:27:54.034 "enable_placement_id": 0, 00:27:54.034 "enable_zerocopy_send_server": true, 00:27:54.034 "enable_zerocopy_send_client": false, 00:27:54.034 "zerocopy_threshold": 0, 00:27:54.034 "tls_version": 0, 00:27:54.034 "enable_ktls": false 00:27:54.034 } 00:27:54.034 } 00:27:54.034 ] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "vmd", 00:27:54.034 "config": [] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "accel", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "accel_set_options", 00:27:54.034 "params": { 00:27:54.034 "small_cache_size": 128, 00:27:54.034 "large_cache_size": 16, 00:27:54.034 "task_count": 2048, 00:27:54.034 "sequence_count": 2048, 00:27:54.034 "buf_count": 2048 00:27:54.034 } 00:27:54.034 } 00:27:54.034 ] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "bdev", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "bdev_set_options", 00:27:54.034 "params": { 00:27:54.034 "bdev_io_pool_size": 65535, 00:27:54.034 "bdev_io_cache_size": 256, 00:27:54.034 "bdev_auto_examine": true, 00:27:54.034 "iobuf_small_cache_size": 128, 00:27:54.034 "iobuf_large_cache_size": 16 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "bdev_raid_set_options", 00:27:54.034 "params": { 00:27:54.034 "process_window_size_kb": 1024 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "bdev_iscsi_set_options", 00:27:54.034 "params": { 00:27:54.034 "timeout_sec": 30 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "bdev_nvme_set_options", 00:27:54.034 "params": { 00:27:54.034 "action_on_timeout": "none", 00:27:54.034 "timeout_us": 0, 00:27:54.034 "timeout_admin_us": 0, 00:27:54.034 "keep_alive_timeout_ms": 10000, 00:27:54.034 "arbitration_burst": 0, 00:27:54.034 "low_priority_weight": 0, 00:27:54.034 "medium_priority_weight": 0, 00:27:54.034 "high_priority_weight": 0, 00:27:54.034 "nvme_adminq_poll_period_us": 10000, 00:27:54.034 "nvme_ioq_poll_period_us": 0, 00:27:54.034 "io_queue_requests": 0, 00:27:54.034 "delay_cmd_submit": true, 00:27:54.034 "transport_retry_count": 4, 00:27:54.034 "bdev_retry_count": 3, 00:27:54.034 "transport_ack_timeout": 0, 00:27:54.034 "ctrlr_loss_timeout_sec": 0, 00:27:54.034 "reconnect_delay_sec": 0, 00:27:54.034 "fast_io_fail_timeout_sec": 0, 00:27:54.034 "disable_auto_failback": false, 00:27:54.034 "generate_uuids": false, 00:27:54.034 "transport_tos": 0, 00:27:54.034 "nvme_error_stat": false, 00:27:54.034 "rdma_srq_size": 0, 00:27:54.034 "io_path_stat": false, 00:27:54.034 "allow_accel_sequence": false, 00:27:54.034 "rdma_max_cq_size": 0, 00:27:54.034 "rdma_cm_event_timeout_ms": 0, 00:27:54.034 "dhchap_digests": [ 00:27:54.034 "sha256", 00:27:54.034 "sha384", 00:27:54.034 "sha512" 00:27:54.034 ], 00:27:54.034 "dhchap_dhgroups": [ 00:27:54.034 "null", 00:27:54.034 "ffdhe2048", 00:27:54.034 "ffdhe3072", 
00:27:54.034 "ffdhe4096", 00:27:54.034 "ffdhe6144", 00:27:54.034 "ffdhe8192" 00:27:54.034 ] 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "bdev_nvme_set_hotplug", 00:27:54.034 "params": { 00:27:54.034 "period_us": 100000, 00:27:54.034 "enable": false 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "bdev_malloc_create", 00:27:54.034 "params": { 00:27:54.034 "name": "malloc0", 00:27:54.034 "num_blocks": 8192, 00:27:54.034 "block_size": 4096, 00:27:54.034 "physical_block_size": 4096, 00:27:54.034 "uuid": "d79beb19-f139-4665-81c5-b49efa653287", 00:27:54.034 "optimal_io_boundary": 0 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "bdev_wait_for_examine" 00:27:54.034 } 00:27:54.034 ] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "nbd", 00:27:54.034 "config": [] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "scheduler", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "framework_set_scheduler", 00:27:54.034 "params": { 00:27:54.034 "name": "static" 00:27:54.034 } 00:27:54.034 } 00:27:54.034 ] 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "subsystem": "nvmf", 00:27:54.034 "config": [ 00:27:54.034 { 00:27:54.034 "method": "nvmf_set_config", 00:27:54.034 "params": { 00:27:54.034 "discovery_filter": "match_any", 00:27:54.034 "admin_cmd_passthru": { 00:27:54.034 "identify_ctrlr": false 00:27:54.034 } 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "nvmf_set_max_subsystems", 00:27:54.034 "params": { 00:27:54.034 "max_subsystems": 1024 00:27:54.034 } 00:27:54.034 }, 00:27:54.034 { 00:27:54.034 "method": "nvmf_set_crdt", 00:27:54.034 "params": { 00:27:54.034 "crdt1": 0, 00:27:54.034 "crdt2": 0, 00:27:54.034 "crdt3": 0 00:27:54.035 } 00:27:54.035 }, 00:27:54.035 { 00:27:54.035 "method": "nvmf_create_transport", 00:27:54.035 "params": { 00:27:54.035 "trtype": "TCP", 00:27:54.035 "max_queue_depth": 128, 00:27:54.035 "max_io_qpairs_per_ctrlr": 127, 00:27:54.035 "in_capsule_data_size": 4096, 00:27:54.035 "max_io_size": 131072, 00:27:54.035 "io_unit_size": 131072, 00:27:54.035 "max_aq_depth": 128, 00:27:54.035 "num_shared_buffers": 511, 00:27:54.035 "buf_cache_size": 4294967295, 00:27:54.035 "dif_insert_or_strip": false, 00:27:54.035 "zcopy": false, 00:27:54.035 "c2h_success": false, 00:27:54.035 "sock_priority": 0, 00:27:54.035 "abort_timeout_sec": 1, 00:27:54.035 "ack_timeout": 0, 00:27:54.035 "data_wr_pool_size": 0 00:27:54.035 } 00:27:54.035 }, 00:27:54.035 { 00:27:54.035 "method": "nvmf_create_subsystem", 00:27:54.035 "params": { 00:27:54.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.035 "allow_any_host": false, 00:27:54.035 "serial_number": "00000000000000000000", 00:27:54.035 "model_number": "SPDK bdev Controller", 00:27:54.035 "max_namespaces": 32, 00:27:54.035 "min_cntlid": 1, 00:27:54.035 "max_cntlid": 65519, 00:27:54.035 "ana_reporting": false 00:27:54.035 } 00:27:54.035 }, 00:27:54.035 { 00:27:54.035 "method": "nvmf_subsystem_add_host", 00:27:54.035 "params": { 00:27:54.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.035 "host": "nqn.2016-06.io.spdk:host1", 00:27:54.035 "psk": "key0" 00:27:54.035 } 00:27:54.035 }, 00:27:54.035 { 00:27:54.035 "method": "nvmf_subsystem_add_ns", 00:27:54.035 "params": { 00:27:54.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.035 "namespace": { 00:27:54.035 "nsid": 1, 00:27:54.035 "bdev_name": "malloc0", 00:27:54.035 "nguid": "D79BEB19F139466581C5B49EFA653287", 00:27:54.035 "uuid": "d79beb19-f139-4665-81c5-b49efa653287", 00:27:54.035 
"no_auto_visible": false 00:27:54.035 } 00:27:54.035 } 00:27:54.035 }, 00:27:54.035 { 00:27:54.035 "method": "nvmf_subsystem_add_listener", 00:27:54.035 "params": { 00:27:54.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.035 "listen_address": { 00:27:54.035 "trtype": "TCP", 00:27:54.035 "adrfam": "IPv4", 00:27:54.035 "traddr": "10.0.0.2", 00:27:54.035 "trsvcid": "4420" 00:27:54.035 }, 00:27:54.035 "secure_channel": true 00:27:54.035 } 00:27:54.035 } 00:27:54.035 ] 00:27:54.035 } 00:27:54.035 ] 00:27:54.035 }' 00:27:54.035 09:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:27:54.295 09:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:27:54.295 "subsystems": [ 00:27:54.295 { 00:27:54.295 "subsystem": "keyring", 00:27:54.295 "config": [ 00:27:54.295 { 00:27:54.295 "method": "keyring_file_add_key", 00:27:54.295 "params": { 00:27:54.295 "name": "key0", 00:27:54.295 "path": "/tmp/tmp.zY1a3MPwhN" 00:27:54.295 } 00:27:54.295 } 00:27:54.295 ] 00:27:54.295 }, 00:27:54.295 { 00:27:54.295 "subsystem": "iobuf", 00:27:54.295 "config": [ 00:27:54.295 { 00:27:54.295 "method": "iobuf_set_options", 00:27:54.295 "params": { 00:27:54.295 "small_pool_count": 8192, 00:27:54.295 "large_pool_count": 1024, 00:27:54.295 "small_bufsize": 8192, 00:27:54.295 "large_bufsize": 135168 00:27:54.295 } 00:27:54.295 } 00:27:54.295 ] 00:27:54.295 }, 00:27:54.295 { 00:27:54.296 "subsystem": "sock", 00:27:54.296 "config": [ 00:27:54.296 { 00:27:54.296 "method": "sock_impl_set_options", 00:27:54.296 "params": { 00:27:54.296 "impl_name": "posix", 00:27:54.296 "recv_buf_size": 2097152, 00:27:54.296 "send_buf_size": 2097152, 00:27:54.296 "enable_recv_pipe": true, 00:27:54.296 "enable_quickack": false, 00:27:54.296 "enable_placement_id": 0, 00:27:54.296 "enable_zerocopy_send_server": true, 00:27:54.296 "enable_zerocopy_send_client": false, 00:27:54.296 "zerocopy_threshold": 0, 00:27:54.296 "tls_version": 0, 00:27:54.296 "enable_ktls": false 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "sock_impl_set_options", 00:27:54.296 "params": { 00:27:54.296 "impl_name": "ssl", 00:27:54.296 "recv_buf_size": 4096, 00:27:54.296 "send_buf_size": 4096, 00:27:54.296 "enable_recv_pipe": true, 00:27:54.296 "enable_quickack": false, 00:27:54.296 "enable_placement_id": 0, 00:27:54.296 "enable_zerocopy_send_server": true, 00:27:54.296 "enable_zerocopy_send_client": false, 00:27:54.296 "zerocopy_threshold": 0, 00:27:54.296 "tls_version": 0, 00:27:54.296 "enable_ktls": false 00:27:54.296 } 00:27:54.296 } 00:27:54.296 ] 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "subsystem": "vmd", 00:27:54.296 "config": [] 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "subsystem": "accel", 00:27:54.296 "config": [ 00:27:54.296 { 00:27:54.296 "method": "accel_set_options", 00:27:54.296 "params": { 00:27:54.296 "small_cache_size": 128, 00:27:54.296 "large_cache_size": 16, 00:27:54.296 "task_count": 2048, 00:27:54.296 "sequence_count": 2048, 00:27:54.296 "buf_count": 2048 00:27:54.296 } 00:27:54.296 } 00:27:54.296 ] 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "subsystem": "bdev", 00:27:54.296 "config": [ 00:27:54.296 { 00:27:54.296 "method": "bdev_set_options", 00:27:54.296 "params": { 00:27:54.296 "bdev_io_pool_size": 65535, 00:27:54.296 "bdev_io_cache_size": 256, 00:27:54.296 "bdev_auto_examine": true, 00:27:54.296 "iobuf_small_cache_size": 128, 00:27:54.296 "iobuf_large_cache_size": 16 00:27:54.296 } 00:27:54.296 }, 
00:27:54.296 { 00:27:54.296 "method": "bdev_raid_set_options", 00:27:54.296 "params": { 00:27:54.296 "process_window_size_kb": 1024 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "bdev_iscsi_set_options", 00:27:54.296 "params": { 00:27:54.296 "timeout_sec": 30 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "bdev_nvme_set_options", 00:27:54.296 "params": { 00:27:54.296 "action_on_timeout": "none", 00:27:54.296 "timeout_us": 0, 00:27:54.296 "timeout_admin_us": 0, 00:27:54.296 "keep_alive_timeout_ms": 10000, 00:27:54.296 "arbitration_burst": 0, 00:27:54.296 "low_priority_weight": 0, 00:27:54.296 "medium_priority_weight": 0, 00:27:54.296 "high_priority_weight": 0, 00:27:54.296 "nvme_adminq_poll_period_us": 10000, 00:27:54.296 "nvme_ioq_poll_period_us": 0, 00:27:54.296 "io_queue_requests": 512, 00:27:54.296 "delay_cmd_submit": true, 00:27:54.296 "transport_retry_count": 4, 00:27:54.296 "bdev_retry_count": 3, 00:27:54.296 "transport_ack_timeout": 0, 00:27:54.296 "ctrlr_loss_timeout_sec": 0, 00:27:54.296 "reconnect_delay_sec": 0, 00:27:54.296 "fast_io_fail_timeout_sec": 0, 00:27:54.296 "disable_auto_failback": false, 00:27:54.296 "generate_uuids": false, 00:27:54.296 "transport_tos": 0, 00:27:54.296 "nvme_error_stat": false, 00:27:54.296 "rdma_srq_size": 0, 00:27:54.296 "io_path_stat": false, 00:27:54.296 "allow_accel_sequence": false, 00:27:54.296 "rdma_max_cq_size": 0, 00:27:54.296 "rdma_cm_event_timeout_ms": 0, 00:27:54.296 "dhchap_digests": [ 00:27:54.296 "sha256", 00:27:54.296 "sha384", 00:27:54.296 "sha512" 00:27:54.296 ], 00:27:54.296 "dhchap_dhgroups": [ 00:27:54.296 "null", 00:27:54.296 "ffdhe2048", 00:27:54.296 "ffdhe3072", 00:27:54.296 "ffdhe4096", 00:27:54.296 "ffdhe6144", 00:27:54.296 "ffdhe8192" 00:27:54.296 ] 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "bdev_nvme_attach_controller", 00:27:54.296 "params": { 00:27:54.296 "name": "nvme0", 00:27:54.296 "trtype": "TCP", 00:27:54.296 "adrfam": "IPv4", 00:27:54.296 "traddr": "10.0.0.2", 00:27:54.296 "trsvcid": "4420", 00:27:54.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.296 "prchk_reftag": false, 00:27:54.296 "prchk_guard": false, 00:27:54.296 "ctrlr_loss_timeout_sec": 0, 00:27:54.296 "reconnect_delay_sec": 0, 00:27:54.296 "fast_io_fail_timeout_sec": 0, 00:27:54.296 "psk": "key0", 00:27:54.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.296 "hdgst": false, 00:27:54.296 "ddgst": false 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "bdev_nvme_set_hotplug", 00:27:54.296 "params": { 00:27:54.296 "period_us": 100000, 00:27:54.296 "enable": false 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "bdev_enable_histogram", 00:27:54.296 "params": { 00:27:54.296 "name": "nvme0n1", 00:27:54.296 "enable": true 00:27:54.296 } 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "method": "bdev_wait_for_examine" 00:27:54.296 } 00:27:54.296 ] 00:27:54.296 }, 00:27:54.296 { 00:27:54.296 "subsystem": "nbd", 00:27:54.296 "config": [] 00:27:54.296 } 00:27:54.296 ] 00:27:54.296 }' 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 391994 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 391994 ']' 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 391994 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:54.296 
09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 391994 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 391994' 00:27:54.296 killing process with pid 391994 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 391994 00:27:54.296 Received shutdown signal, test time was about 1.000000 seconds 00:27:54.296 00:27:54.296 Latency(us) 00:27:54.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.296 =================================================================================================================== 00:27:54.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 391994 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 391862 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 391862 ']' 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 391862 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:54.296 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 391862 00:27:54.557 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:54.557 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:54.557 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 391862' 00:27:54.557 killing process with pid 391862 00:27:54.557 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 391862 00:27:54.557 [2024-05-16 09:40:47.898540] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:54.557 09:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 391862 00:27:54.557 09:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:27:54.557 09:40:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.557 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:54.557 09:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:27:54.557 "subsystems": [ 00:27:54.557 { 00:27:54.557 "subsystem": "keyring", 00:27:54.557 "config": [ 00:27:54.557 { 00:27:54.557 "method": "keyring_file_add_key", 00:27:54.557 "params": { 00:27:54.557 "name": "key0", 00:27:54.557 "path": "/tmp/tmp.zY1a3MPwhN" 00:27:54.557 } 00:27:54.557 } 00:27:54.557 ] 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "subsystem": "iobuf", 00:27:54.557 "config": [ 00:27:54.557 { 00:27:54.557 "method": "iobuf_set_options", 00:27:54.557 "params": { 00:27:54.557 "small_pool_count": 8192, 00:27:54.557 "large_pool_count": 1024, 00:27:54.557 "small_bufsize": 8192, 00:27:54.557 "large_bufsize": 135168 00:27:54.557 } 00:27:54.557 } 00:27:54.557 ] 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "subsystem": "sock", 00:27:54.557 "config": [ 00:27:54.557 { 00:27:54.557 "method": 
"sock_impl_set_options", 00:27:54.557 "params": { 00:27:54.557 "impl_name": "posix", 00:27:54.557 "recv_buf_size": 2097152, 00:27:54.557 "send_buf_size": 2097152, 00:27:54.557 "enable_recv_pipe": true, 00:27:54.557 "enable_quickack": false, 00:27:54.557 "enable_placement_id": 0, 00:27:54.557 "enable_zerocopy_send_server": true, 00:27:54.557 "enable_zerocopy_send_client": false, 00:27:54.557 "zerocopy_threshold": 0, 00:27:54.557 "tls_version": 0, 00:27:54.557 "enable_ktls": false 00:27:54.557 } 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "method": "sock_impl_set_options", 00:27:54.557 "params": { 00:27:54.557 "impl_name": "ssl", 00:27:54.557 "recv_buf_size": 4096, 00:27:54.557 "send_buf_size": 4096, 00:27:54.557 "enable_recv_pipe": true, 00:27:54.557 "enable_quickack": false, 00:27:54.557 "enable_placement_id": 0, 00:27:54.557 "enable_zerocopy_send_server": true, 00:27:54.557 "enable_zerocopy_send_client": false, 00:27:54.557 "zerocopy_threshold": 0, 00:27:54.557 "tls_version": 0, 00:27:54.557 "enable_ktls": false 00:27:54.557 } 00:27:54.557 } 00:27:54.557 ] 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "subsystem": "vmd", 00:27:54.557 "config": [] 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "subsystem": "accel", 00:27:54.557 "config": [ 00:27:54.557 { 00:27:54.557 "method": "accel_set_options", 00:27:54.557 "params": { 00:27:54.557 "small_cache_size": 128, 00:27:54.557 "large_cache_size": 16, 00:27:54.557 "task_count": 2048, 00:27:54.557 "sequence_count": 2048, 00:27:54.557 "buf_count": 2048 00:27:54.557 } 00:27:54.557 } 00:27:54.557 ] 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "subsystem": "bdev", 00:27:54.557 "config": [ 00:27:54.557 { 00:27:54.557 "method": "bdev_set_options", 00:27:54.557 "params": { 00:27:54.557 "bdev_io_pool_size": 65535, 00:27:54.557 "bdev_io_cache_size": 256, 00:27:54.557 "bdev_auto_examine": true, 00:27:54.557 "iobuf_small_cache_size": 128, 00:27:54.557 "iobuf_large_cache_size": 16 00:27:54.557 } 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "method": "bdev_raid_set_options", 00:27:54.557 "params": { 00:27:54.557 "process_window_size_kb": 1024 00:27:54.557 } 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "method": "bdev_iscsi_set_options", 00:27:54.557 "params": { 00:27:54.557 "timeout_sec": 30 00:27:54.557 } 00:27:54.557 }, 00:27:54.557 { 00:27:54.557 "method": "bdev_nvme_set_options", 00:27:54.557 "params": { 00:27:54.557 "action_on_timeout": "none", 00:27:54.557 "timeout_us": 0, 00:27:54.557 "timeout_admin_us": 0, 00:27:54.557 "keep_alive_timeout_ms": 10000, 00:27:54.557 "arbitration_burst": 0, 00:27:54.557 "low_priority_weight": 0, 00:27:54.557 "medium_priority_weight": 0, 00:27:54.557 "high_priority_weight": 0, 00:27:54.557 "nvme_adminq_poll_period_us": 10000, 00:27:54.557 "nvme_ioq_poll_period_us": 0, 00:27:54.557 "io_queue_requests": 0, 00:27:54.558 "delay_cmd_submit": true, 00:27:54.558 "transport_retry_count": 4, 00:27:54.558 "bdev_retry_count": 3, 00:27:54.558 "transport_ack_timeout": 0, 00:27:54.558 "ctrlr_loss_timeout_sec": 0, 00:27:54.558 "reconnect_delay_sec": 0, 00:27:54.558 "fast_io_fail_timeout_sec": 0, 00:27:54.558 "disable_auto_failback": false, 00:27:54.558 "generate_uuids": false, 00:27:54.558 "transport_tos": 0, 00:27:54.558 "nvme_error_stat": false, 00:27:54.558 "rdma_srq_size": 0, 00:27:54.558 "io_path_stat": false, 00:27:54.558 "allow_accel_sequence": false, 00:27:54.558 "rdma_max_cq_size": 0, 00:27:54.558 "rdma_cm_event_timeout_ms": 0, 00:27:54.558 "dhchap_digests": [ 00:27:54.558 "sha256", 00:27:54.558 "sha384", 00:27:54.558 "sha512" 
00:27:54.558 ], 00:27:54.558 "dhchap_dhgroups": [ 00:27:54.558 "null", 00:27:54.558 "ffdhe2048", 00:27:54.558 "ffdhe3072", 00:27:54.558 "ffdhe4096", 00:27:54.558 "ffdhe6144", 00:27:54.558 "ffdhe8192" 00:27:54.558 ] 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "bdev_nvme_set_hotplug", 00:27:54.558 "params": { 00:27:54.558 "period_us": 100000, 00:27:54.558 "enable": false 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "bdev_malloc_create", 00:27:54.558 "params": { 00:27:54.558 "name": "malloc0", 00:27:54.558 "num_blocks": 8192, 00:27:54.558 "block_size": 4096, 00:27:54.558 "physical_block_size": 4096, 00:27:54.558 "uuid": "d79beb19-f139-4665-81c5-b49efa653287", 00:27:54.558 "optimal_io_boundary": 0 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "bdev_wait_for_examine" 00:27:54.558 } 00:27:54.558 ] 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "subsystem": "nbd", 00:27:54.558 "config": [] 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "subsystem": "scheduler", 00:27:54.558 "config": [ 00:27:54.558 { 00:27:54.558 "method": "framework_set_scheduler", 00:27:54.558 "params": { 00:27:54.558 "name": "static" 00:27:54.558 } 00:27:54.558 } 00:27:54.558 ] 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "subsystem": "nvmf", 00:27:54.558 "config": [ 00:27:54.558 { 00:27:54.558 "method": "nvmf_set_config", 00:27:54.558 "params": { 00:27:54.558 "discovery_filter": "match_any", 00:27:54.558 "admin_cmd_passthru": { 00:27:54.558 "identify_ctrlr": false 00:27:54.558 } 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_set_max_subsystems", 00:27:54.558 "params": { 00:27:54.558 "max_subsystems": 1024 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_set_crdt", 00:27:54.558 "params": { 00:27:54.558 "crdt1": 0, 00:27:54.558 "crdt2": 0, 00:27:54.558 "crdt3": 0 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_create_transport", 00:27:54.558 "params": { 00:27:54.558 "trtype": "TCP", 00:27:54.558 "max_queue_depth": 128, 00:27:54.558 "max_io_qpairs_per_ctrlr": 127, 00:27:54.558 "in_capsule_data_size": 4096, 00:27:54.558 "max_io_size": 131072, 00:27:54.558 "io_unit_size": 131072, 00:27:54.558 "max_aq_depth": 128, 00:27:54.558 "num_shared_buffers": 511, 00:27:54.558 "buf_cache_size": 4294967295, 00:27:54.558 "dif_insert_or_strip": false, 00:27:54.558 "zcopy": false, 00:27:54.558 "c2h_success": false, 00:27:54.558 "sock_priority": 0, 00:27:54.558 "abort_timeout_sec": 1, 00:27:54.558 "ack_timeout": 0, 00:27:54.558 "data_wr_pool_size": 0 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_create_subsystem", 00:27:54.558 "params": { 00:27:54.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.558 "allow_any_host": false, 00:27:54.558 "serial_number": "00000000000000000000", 00:27:54.558 "model_number": "SPDK bdev Controller", 00:27:54.558 "max_namespaces": 32, 00:27:54.558 "min_cntlid": 1, 00:27:54.558 "max_cntlid": 65519, 00:27:54.558 "ana_reporting": false 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_subsystem_add_host", 00:27:54.558 "params": { 00:27:54.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.558 "host": "nqn.2016-06.io.spdk:host1", 00:27:54.558 "psk": "key0" 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_subsystem_add_ns", 00:27:54.558 "params": { 00:27:54.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.558 "namespace": { 00:27:54.558 "nsid": 1, 00:27:54.558 "bdev_name": "malloc0", 00:27:54.558 
"nguid": "D79BEB19F139466581C5B49EFA653287", 00:27:54.558 "uuid": "d79beb19-f139-4665-81c5-b49efa653287", 00:27:54.558 "no_auto_visible": false 00:27:54.558 } 00:27:54.558 } 00:27:54.558 }, 00:27:54.558 { 00:27:54.558 "method": "nvmf_subsystem_add_listener", 00:27:54.558 "params": { 00:27:54.558 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.558 "listen_address": { 00:27:54.558 "trtype": "TCP", 00:27:54.558 "adrfam": "IPv4", 00:27:54.558 "traddr": "10.0.0.2", 00:27:54.558 "trsvcid": "4420" 00:27:54.558 }, 00:27:54.558 "secure_channel": true 00:27:54.558 } 00:27:54.558 } 00:27:54.558 ] 00:27:54.558 } 00:27:54.558 ] 00:27:54.558 }' 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=392676 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 392676 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 392676 ']' 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:54.558 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:54.558 [2024-05-16 09:40:48.095890] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:27:54.558 [2024-05-16 09:40:48.095949] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.818 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.818 [2024-05-16 09:40:48.159448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.818 [2024-05-16 09:40:48.224873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.818 [2024-05-16 09:40:48.224910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.818 [2024-05-16 09:40:48.224917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.818 [2024-05-16 09:40:48.224924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.818 [2024-05-16 09:40:48.224929] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:54.818 [2024-05-16 09:40:48.224979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.078 [2024-05-16 09:40:48.413984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.078 [2024-05-16 09:40:48.445976] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:55.078 [2024-05-16 09:40:48.446019] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:55.078 [2024-05-16 09:40:48.463351] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.339 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.339 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:55.339 09:40:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:55.339 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.339 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=392767 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 392767 /var/tmp/bdevperf.sock 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 392767 ']' 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:55.600 09:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:27:55.600 "subsystems": [ 00:27:55.600 { 00:27:55.600 "subsystem": "keyring", 00:27:55.600 "config": [ 00:27:55.600 { 00:27:55.600 "method": "keyring_file_add_key", 00:27:55.600 "params": { 00:27:55.600 "name": "key0", 00:27:55.600 "path": "/tmp/tmp.zY1a3MPwhN" 00:27:55.600 } 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "subsystem": "iobuf", 00:27:55.600 "config": [ 00:27:55.600 { 00:27:55.600 "method": "iobuf_set_options", 00:27:55.600 "params": { 00:27:55.600 "small_pool_count": 8192, 00:27:55.600 "large_pool_count": 1024, 00:27:55.600 "small_bufsize": 8192, 00:27:55.600 "large_bufsize": 135168 00:27:55.600 } 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "subsystem": "sock", 00:27:55.600 "config": [ 00:27:55.600 { 00:27:55.600 "method": "sock_impl_set_options", 00:27:55.600 "params": { 00:27:55.600 "impl_name": "posix", 00:27:55.600 "recv_buf_size": 2097152, 00:27:55.600 "send_buf_size": 2097152, 00:27:55.600 "enable_recv_pipe": true, 00:27:55.600 "enable_quickack": false, 00:27:55.600 "enable_placement_id": 0, 00:27:55.600 "enable_zerocopy_send_server": true, 00:27:55.600 "enable_zerocopy_send_client": false, 00:27:55.600 "zerocopy_threshold": 0, 00:27:55.600 "tls_version": 0, 00:27:55.600 "enable_ktls": false 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "sock_impl_set_options", 00:27:55.600 "params": { 00:27:55.600 "impl_name": "ssl", 00:27:55.600 "recv_buf_size": 4096, 00:27:55.600 "send_buf_size": 4096, 00:27:55.600 "enable_recv_pipe": true, 00:27:55.600 "enable_quickack": false, 00:27:55.600 "enable_placement_id": 0, 00:27:55.600 "enable_zerocopy_send_server": true, 00:27:55.600 "enable_zerocopy_send_client": false, 00:27:55.600 "zerocopy_threshold": 0, 00:27:55.600 "tls_version": 0, 00:27:55.600 "enable_ktls": false 00:27:55.600 } 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "subsystem": "vmd", 00:27:55.600 "config": [] 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "subsystem": "accel", 00:27:55.600 "config": [ 00:27:55.600 { 00:27:55.600 "method": "accel_set_options", 00:27:55.600 "params": { 00:27:55.600 "small_cache_size": 128, 00:27:55.600 "large_cache_size": 16, 00:27:55.600 "task_count": 2048, 00:27:55.600 "sequence_count": 2048, 00:27:55.600 "buf_count": 2048 00:27:55.600 } 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "subsystem": "bdev", 00:27:55.600 "config": [ 00:27:55.600 { 00:27:55.600 "method": "bdev_set_options", 00:27:55.600 "params": { 00:27:55.600 "bdev_io_pool_size": 65535, 00:27:55.600 "bdev_io_cache_size": 256, 00:27:55.600 "bdev_auto_examine": true, 00:27:55.600 "iobuf_small_cache_size": 128, 00:27:55.600 "iobuf_large_cache_size": 16 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_raid_set_options", 00:27:55.600 "params": { 00:27:55.600 "process_window_size_kb": 1024 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_iscsi_set_options", 00:27:55.600 "params": { 00:27:55.600 "timeout_sec": 30 00:27:55.600 } 
00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_nvme_set_options", 00:27:55.600 "params": { 00:27:55.600 "action_on_timeout": "none", 00:27:55.600 "timeout_us": 0, 00:27:55.600 "timeout_admin_us": 0, 00:27:55.600 "keep_alive_timeout_ms": 10000, 00:27:55.600 "arbitration_burst": 0, 00:27:55.600 "low_priority_weight": 0, 00:27:55.600 "medium_priority_weight": 0, 00:27:55.600 "high_priority_weight": 0, 00:27:55.600 "nvme_adminq_poll_period_us": 10000, 00:27:55.600 "nvme_ioq_poll_period_us": 0, 00:27:55.600 "io_queue_requests": 512, 00:27:55.600 "delay_cmd_submit": true, 00:27:55.600 "transport_retry_count": 4, 00:27:55.600 "bdev_retry_count": 3, 00:27:55.600 "transport_ack_timeout": 0, 00:27:55.600 "ctrlr_loss_timeout_sec": 0, 00:27:55.600 "reconnect_delay_sec": 0, 00:27:55.600 "fast_io_fail_timeout_sec": 0, 00:27:55.600 "disable_auto_failback": false, 00:27:55.600 "generate_uuids": false, 00:27:55.600 "transport_tos": 0, 00:27:55.600 "nvme_error_stat": false, 00:27:55.600 "rdma_srq_size": 0, 00:27:55.600 "io_path_stat": false, 00:27:55.600 "allow_accel_sequence": false, 00:27:55.600 "rdma_max_cq_size": 0, 00:27:55.600 "rdma_cm_event_timeout_ms": 0, 00:27:55.600 "dhchap_digests": [ 00:27:55.600 "sha256", 00:27:55.600 "sha384", 00:27:55.600 "sha512" 00:27:55.600 ], 00:27:55.600 "dhchap_dhgroups": [ 00:27:55.600 "null", 00:27:55.600 "ffdhe2048", 00:27:55.600 "ffdhe3072", 00:27:55.600 "ffdhe4096", 00:27:55.600 "ffdhe6144", 00:27:55.600 "ffdhe8192" 00:27:55.600 ] 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_nvme_attach_controller", 00:27:55.600 "params": { 00:27:55.600 "name": "nvme0", 00:27:55.600 "trtype": "TCP", 00:27:55.600 "adrfam": "IPv4", 00:27:55.600 "traddr": "10.0.0.2", 00:27:55.600 "trsvcid": "4420", 00:27:55.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.600 "prchk_reftag": false, 00:27:55.600 "prchk_guard": false, 00:27:55.600 "ctrlr_loss_timeout_sec": 0, 00:27:55.600 "reconnect_delay_sec": 0, 00:27:55.600 "fast_io_fail_timeout_sec": 0, 00:27:55.600 "psk": "key0", 00:27:55.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.600 "hdgst": false, 00:27:55.600 "ddgst": false 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_nvme_set_hotplug", 00:27:55.600 "params": { 00:27:55.600 "period_us": 100000, 00:27:55.600 "enable": false 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_enable_histogram", 00:27:55.600 "params": { 00:27:55.600 "name": "nvme0n1", 00:27:55.600 "enable": true 00:27:55.600 } 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "method": "bdev_wait_for_examine" 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }, 00:27:55.600 { 00:27:55.600 "subsystem": "nbd", 00:27:55.600 "config": [] 00:27:55.600 } 00:27:55.600 ] 00:27:55.600 }' 00:27:55.600 [2024-05-16 09:40:48.948422] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:27:55.600 [2024-05-16 09:40:48.948471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392767 ] 00:27:55.600 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.600 [2024-05-16 09:40:49.021446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.600 [2024-05-16 09:40:49.075342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.861 [2024-05-16 09:40:49.201456] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:56.432 09:40:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:56.432 09:40:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:56.432 09:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:56.432 09:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:27:56.432 09:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.432 09:40:49 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:56.432 Running I/O for 1 seconds... 00:27:57.815 00:27:57.815 Latency(us) 00:27:57.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.815 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:57.815 Verification LBA range: start 0x0 length 0x2000 00:27:57.815 nvme0n1 : 1.02 5341.25 20.86 0.00 0.00 23790.95 5789.01 40632.32 00:27:57.815 =================================================================================================================== 00:27:57.815 Total : 5341.25 20.86 0.00 0.00 23790.95 5789.01 40632.32 00:27:57.815 0 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:57.816 09:40:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:57.816 nvmf_trace.0 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 392767 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 392767 ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 392767 
00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 392767 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 392767' 00:27:57.816 killing process with pid 392767 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 392767 00:27:57.816 Received shutdown signal, test time was about 1.000000 seconds 00:27:57.816 00:27:57.816 Latency(us) 00:27:57.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.816 =================================================================================================================== 00:27:57.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 392767 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.816 rmmod nvme_tcp 00:27:57.816 rmmod nvme_fabrics 00:27:57.816 rmmod nvme_keyring 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 392676 ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 392676 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 392676 ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 392676 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:57.816 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 392676 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 392676' 00:27:58.076 killing process with pid 392676 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 392676 00:27:58.076 [2024-05-16 09:40:51.377304] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 392676 
00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.076 09:40:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.632 09:40:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.632 09:40:53 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.84cR5OgmdY /tmp/tmp.1v8gK86imo /tmp/tmp.zY1a3MPwhN 00:28:00.632 00:28:00.632 real 1m22.598s 00:28:00.632 user 2m10.009s 00:28:00.632 sys 0m23.954s 00:28:00.632 09:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:00.632 09:40:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:28:00.632 ************************************ 00:28:00.632 END TEST nvmf_tls 00:28:00.632 ************************************ 00:28:00.632 09:40:53 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:28:00.632 09:40:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:00.632 09:40:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:00.632 09:40:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:00.632 ************************************ 00:28:00.632 START TEST nvmf_fips 00:28:00.632 ************************************ 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:28:00.632 * Looking for test storage... 
00:28:00.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.632 09:40:53 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:28:00.632 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:28:00.633 09:40:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:28:00.633 Error setting digest 00:28:00.633 00E29549A47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:28:00.633 00E29549A47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.633 09:40:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.222 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.484 
09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:07.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:07.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:07.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:07.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:07.484 09:41:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.484 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.745 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.745 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:07.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.773 ms 00:28:07.745 00:28:07.745 --- 10.0.0.2 ping statistics --- 00:28:07.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.745 rtt min/avg/max/mdev = 0.773/0.773/0.773/0.000 ms 00:28:07.745 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:28:07.746 00:28:07.746 --- 10.0.0.1 ping statistics --- 00:28:07.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.746 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=397408 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 397408 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 397408 ']' 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:07.746 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:07.746 [2024-05-16 09:41:01.186791] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:07.746 [2024-05-16 09:41:01.186846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.746 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.746 [2024-05-16 09:41:01.271076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.006 [2024-05-16 09:41:01.361256] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.006 [2024-05-16 09:41:01.361311] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:08.006 [2024-05-16 09:41:01.361319] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.006 [2024-05-16 09:41:01.361327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.006 [2024-05-16 09:41:01.361339] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.006 [2024-05-16 09:41:01.361362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:08.577 09:41:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:08.838 [2024-05-16 09:41:02.139974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.838 [2024-05-16 09:41:02.155956] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:08.838 [2024-05-16 09:41:02.156018] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:08.838 [2024-05-16 09:41:02.156269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.838 [2024-05-16 09:41:02.186080] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:08.838 malloc0 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=397753 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 397753 /var/tmp/bdevperf.sock 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 397753 ']' 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:08.838 09:41:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:08.838 [2024-05-16 09:41:02.295109] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:08.838 [2024-05-16 09:41:02.295183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397753 ] 00:28:08.838 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.838 [2024-05-16 09:41:02.349814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.098 [2024-05-16 09:41:02.413795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.668 09:41:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:09.668 09:41:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:28:09.668 09:41:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:09.668 [2024-05-16 09:41:03.197516] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:09.668 [2024-05-16 09:41:03.197581] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:09.928 TLSTESTn1 00:28:09.928 09:41:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:09.928 Running I/O for 10 seconds... 
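For reference, the initiator side of the ten-second verify pass reported below reduces to three steps, all visible in the trace above: start bdevperf in wait mode, attach an NVMe/TCP controller with a pre-shared key so the connection is TLS-protected, then trigger the queued workload over the bdevperf RPC socket. A minimal sketch, assuming $SPDK points at an SPDK checkout and $KEY is the PSK file fips.sh wrote (key.txt, mode 0600); the socket path, address and NQNs are the test values from this log, not general defaults:

  # bdevperf in wait mode (-z) on its own RPC socket, core mask 0x4, 128-deep 4 KiB verify for 10 s
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # attach the target subsystem over TCP with the PSK; SPDK flags TLS support as experimental here
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

  # kick off the configured I/O and wait for the run to finish
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests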
00:28:19.920 00:28:19.920 Latency(us) 00:28:19.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.920 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:19.920 Verification LBA range: start 0x0 length 0x2000 00:28:19.920 TLSTESTn1 : 10.05 2115.77 8.26 0.00 0.00 60400.11 5816.32 178257.92 00:28:19.920 =================================================================================================================== 00:28:19.920 Total : 2115.77 8.26 0.00 0.00 60400.11 5816.32 178257.92 00:28:19.920 0 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:28:20.180 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:20.180 nvmf_trace.0 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 397753 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 397753 ']' 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 397753 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 397753 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 397753' 00:28:20.181 killing process with pid 397753 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 397753 00:28:20.181 Received shutdown signal, test time was about 10.000000 seconds 00:28:20.181 00:28:20.181 Latency(us) 00:28:20.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.181 =================================================================================================================== 00:28:20.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.181 [2024-05-16 09:41:13.635303] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:20.181 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 397753 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.442 rmmod nvme_tcp 00:28:20.442 rmmod nvme_fabrics 00:28:20.442 rmmod nvme_keyring 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 397408 ']' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 397408 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 397408 ']' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 397408 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 397408 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 397408' 00:28:20.442 killing process with pid 397408 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 397408 00:28:20.442 [2024-05-16 09:41:13.879437] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:20.442 [2024-05-16 09:41:13.879472] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 397408 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.442 09:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.987 09:41:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.987 09:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:28:22.987 00:28:22.987 real 0m22.394s 00:28:22.987 user 0m24.669s 00:28:22.987 sys 0m8.493s 00:28:22.987 09:41:16 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:28:22.987 09:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:28:22.987 ************************************ 00:28:22.987 END TEST nvmf_fips 00:28:22.987 ************************************ 00:28:22.987 09:41:16 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:28:22.987 09:41:16 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:22.987 09:41:16 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:22.987 09:41:16 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:22.987 09:41:16 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:22.987 09:41:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.616 09:41:22 nvmf_tcp -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:29.616 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:29.616 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:29.616 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:29.616 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:29.616 09:41:22 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:29.616 09:41:22 
nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:29.616 09:41:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.616 09:41:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.616 ************************************ 00:28:29.616 START TEST nvmf_perf_adq 00:28:29.616 ************************************ 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:29.616 * Looking for test storage... 00:28:29.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.616 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.617 09:41:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.747 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:37.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:37.748 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:37.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:37.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:37.748 09:41:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:38.009 09:41:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:41.311 09:41:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:46.601 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.601 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:46.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:46.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:46.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:28:46.602 00:28:46.602 --- 10.0.0.2 ping statistics --- 00:28:46.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.602 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:28:46.602 00:28:46.602 --- 10.0.0.1 ping statistics --- 00:28:46.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.602 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=409645 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 409645 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 409645 ']' 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:46.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:46.602 09:41:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:46.602 [2024-05-16 09:41:39.773269] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:28:46.602 [2024-05-16 09:41:39.773330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.602 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.602 [2024-05-16 09:41:39.844294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.602 [2024-05-16 09:41:39.921156] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.602 [2024-05-16 09:41:39.921189] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.602 [2024-05-16 09:41:39.921198] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.602 [2024-05-16 09:41:39.921205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.602 [2024-05-16 09:41:39.921210] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.602 [2024-05-16 09:41:39.921345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.602 [2024-05-16 09:41:39.921469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.602 [2024-05-16 09:41:39.921629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.602 [2024-05-16 09:41:39.921631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.176 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.176 [2024-05-16 09:41:40.734996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.438 Malloc1 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.438 [2024-05-16 09:41:40.794119] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:47.438 [2024-05-16 09:41:40.794357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=409995 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:47.438 09:41:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:47.438 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.353 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:49.353 09:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:49.354 "tick_rate": 2400000000, 00:28:49.354 "poll_groups": [ 00:28:49.354 { 00:28:49.354 "name": "nvmf_tgt_poll_group_000", 00:28:49.354 "admin_qpairs": 1, 00:28:49.354 "io_qpairs": 1, 00:28:49.354 "current_admin_qpairs": 1, 00:28:49.354 "current_io_qpairs": 1, 00:28:49.354 "pending_bdev_io": 0, 00:28:49.354 "completed_nvme_io": 20007, 00:28:49.354 "transports": [ 00:28:49.354 { 00:28:49.354 "trtype": "TCP" 00:28:49.354 } 00:28:49.354 ] 00:28:49.354 }, 00:28:49.354 { 00:28:49.354 "name": "nvmf_tgt_poll_group_001", 00:28:49.354 "admin_qpairs": 0, 00:28:49.354 "io_qpairs": 1, 00:28:49.354 "current_admin_qpairs": 0, 00:28:49.354 "current_io_qpairs": 1, 00:28:49.354 "pending_bdev_io": 0, 00:28:49.354 "completed_nvme_io": 27601, 00:28:49.354 "transports": [ 00:28:49.354 { 00:28:49.354 "trtype": "TCP" 00:28:49.354 } 00:28:49.354 ] 00:28:49.354 }, 00:28:49.354 { 00:28:49.354 "name": "nvmf_tgt_poll_group_002", 00:28:49.354 "admin_qpairs": 0, 00:28:49.354 "io_qpairs": 1, 00:28:49.354 "current_admin_qpairs": 0, 00:28:49.354 "current_io_qpairs": 1, 00:28:49.354 "pending_bdev_io": 0, 00:28:49.354 "completed_nvme_io": 20722, 00:28:49.354 "transports": [ 00:28:49.354 { 00:28:49.354 "trtype": "TCP" 00:28:49.354 } 00:28:49.354 ] 00:28:49.354 }, 00:28:49.354 { 00:28:49.354 "name": "nvmf_tgt_poll_group_003", 00:28:49.354 "admin_qpairs": 0, 00:28:49.354 "io_qpairs": 1, 00:28:49.354 "current_admin_qpairs": 0, 00:28:49.354 "current_io_qpairs": 1, 00:28:49.354 "pending_bdev_io": 0, 00:28:49.354 "completed_nvme_io": 20536, 00:28:49.354 "transports": [ 00:28:49.354 { 00:28:49.354 "trtype": "TCP" 00:28:49.354 } 00:28:49.354 ] 00:28:49.354 } 00:28:49.354 ] 00:28:49.354 }' 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:49.354 09:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 409995 00:28:57.498 Initializing NVMe Controllers 00:28:57.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:57.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:57.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:57.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:57.498 Initialization complete. Launching workers. 
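Condensed sketch of the ADQ verification step traced above: while spdk_nvme_perf is running, the test dumps nvmf_get_stats and counts the poll groups that currently own exactly one I/O qpair; with ADQ steering working, all four groups qualify. Calling scripts/rpc.py directly is an assumption here (the test goes through its rpc_cmd wrapper); the jq filter and the expected count of 4 are taken from the output above.

  # Sketch only: count poll groups with exactly one active I/O qpair.
  count=$(./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
  # The first (sock-priority 0) run expects one I/O qpair per poll group, i.e. 4.
  [[ $count -eq 4 ]] || echo "ADQ check failed: $count of 4 poll groups busy"
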
00:28:57.498 ======================================================== 00:28:57.498 Latency(us) 00:28:57.498 Device Information : IOPS MiB/s Average min max 00:28:57.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11487.80 44.87 5570.97 1451.83 8962.63 00:28:57.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14979.60 58.51 4272.07 1317.70 10216.05 00:28:57.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14239.00 55.62 4494.71 1055.57 10377.02 00:28:57.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13751.70 53.72 4653.78 1265.71 9855.54 00:28:57.498 ======================================================== 00:28:57.498 Total : 54458.10 212.73 4700.67 1055.57 10377.02 00:28:57.498 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:57.498 09:41:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:57.499 rmmod nvme_tcp 00:28:57.499 rmmod nvme_fabrics 00:28:57.499 rmmod nvme_keyring 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 409645 ']' 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 409645 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 409645 ']' 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 409645 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:57.499 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 409645 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 409645' 00:28:57.760 killing process with pid 409645 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 409645 00:28:57.760 [2024-05-16 09:41:51.088890] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 409645 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:57.760 09:41:51 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:57.760 09:41:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.309 09:41:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:00.309 09:41:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:00.309 09:41:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:01.694 09:41:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:03.611 09:41:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:08.902 
09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.902 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:08.903 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:08.903 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:08.903 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:08.903 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.903 09:42:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:08.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:29:08.903 00:29:08.903 --- 10.0.0.2 ping statistics --- 00:29:08.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.903 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:29:08.903 00:29:08.903 --- 10.0.0.1 ping statistics --- 00:29:08.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.903 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:08.903 net.core.busy_poll = 1 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:08.903 net.core.busy_read = 1 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:08.903 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=414710 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 414710 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 414710 ']' 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:09.165 09:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:09.165 [2024-05-16 09:42:02.682887] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:09.165 [2024-05-16 09:42:02.682952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.165 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.427 [2024-05-16 09:42:02.754503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.427 [2024-05-16 09:42:02.829616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.427 [2024-05-16 09:42:02.829653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.427 [2024-05-16 09:42:02.829660] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.427 [2024-05-16 09:42:02.829667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.427 [2024-05-16 09:42:02.829672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
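For reference, the adq_configure_driver step traced above reduces to the sequence below. The interface name cvl_0_0, the cvl_0_0_ns_spdk namespace, the 2@0/2@2 queue split and the 10.0.0.2:4420 listener address are specific to this run, and the tc and set_xps_rxqs paths are abbreviated here.

  # Recap of the ADQ NIC setup captured above (target-side netns).
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes in channel mode: TC0 -> queues 0-1, TC1 -> queues 2-3.
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware (skip_sw).
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # Align transmit/receive queues with the application cores.
  ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0
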
00:29:09.427 [2024-05-16 09:42:02.829812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.427 [2024-05-16 09:42:02.829938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.427 [2024-05-16 09:42:02.830098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.427 [2024-05-16 09:42:02.830099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.997 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 [2024-05-16 09:42:03.631319] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 Malloc1 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:10.258 [2024-05-16 09:42:03.690438] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:10.258 [2024-05-16 09:42:03.690663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=414926 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:10.258 09:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:10.258 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:12.169 "tick_rate": 2400000000, 00:29:12.169 "poll_groups": [ 00:29:12.169 { 00:29:12.169 "name": "nvmf_tgt_poll_group_000", 00:29:12.169 "admin_qpairs": 1, 00:29:12.169 "io_qpairs": 2, 00:29:12.169 "current_admin_qpairs": 1, 00:29:12.169 "current_io_qpairs": 2, 00:29:12.169 "pending_bdev_io": 0, 00:29:12.169 "completed_nvme_io": 28336, 00:29:12.169 "transports": [ 00:29:12.169 { 00:29:12.169 "trtype": "TCP" 00:29:12.169 } 00:29:12.169 ] 00:29:12.169 }, 00:29:12.169 { 00:29:12.169 "name": "nvmf_tgt_poll_group_001", 00:29:12.169 "admin_qpairs": 0, 00:29:12.169 "io_qpairs": 2, 00:29:12.169 "current_admin_qpairs": 0, 00:29:12.169 "current_io_qpairs": 2, 00:29:12.169 "pending_bdev_io": 0, 00:29:12.169 "completed_nvme_io": 38507, 00:29:12.169 "transports": [ 00:29:12.169 { 00:29:12.169 "trtype": "TCP" 00:29:12.169 } 00:29:12.169 ] 00:29:12.169 }, 00:29:12.169 { 00:29:12.169 "name": 
"nvmf_tgt_poll_group_002", 00:29:12.169 "admin_qpairs": 0, 00:29:12.169 "io_qpairs": 0, 00:29:12.169 "current_admin_qpairs": 0, 00:29:12.169 "current_io_qpairs": 0, 00:29:12.169 "pending_bdev_io": 0, 00:29:12.169 "completed_nvme_io": 0, 00:29:12.169 "transports": [ 00:29:12.169 { 00:29:12.169 "trtype": "TCP" 00:29:12.169 } 00:29:12.169 ] 00:29:12.169 }, 00:29:12.169 { 00:29:12.169 "name": "nvmf_tgt_poll_group_003", 00:29:12.169 "admin_qpairs": 0, 00:29:12.169 "io_qpairs": 0, 00:29:12.169 "current_admin_qpairs": 0, 00:29:12.169 "current_io_qpairs": 0, 00:29:12.169 "pending_bdev_io": 0, 00:29:12.169 "completed_nvme_io": 0, 00:29:12.169 "transports": [ 00:29:12.169 { 00:29:12.169 "trtype": "TCP" 00:29:12.169 } 00:29:12.169 ] 00:29:12.169 } 00:29:12.169 ] 00:29:12.169 }' 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:12.169 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:12.430 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:12.430 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:12.430 09:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 414926 00:29:20.562 Initializing NVMe Controllers 00:29:20.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:20.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:20.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:20.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:20.562 Initialization complete. Launching workers. 
00:29:20.562 ======================================================== 00:29:20.562 Latency(us) 00:29:20.562 Device Information : IOPS MiB/s Average min max 00:29:20.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12677.60 49.52 5060.87 991.09 50208.97 00:29:20.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7865.10 30.72 8159.80 1196.29 51085.33 00:29:20.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8550.90 33.40 7485.23 1285.03 52110.07 00:29:20.562 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10722.00 41.88 5995.09 1108.80 54679.23 00:29:20.562 ======================================================== 00:29:20.562 Total : 39815.59 155.53 6445.27 991.09 54679.23 00:29:20.562 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.563 rmmod nvme_tcp 00:29:20.563 rmmod nvme_fabrics 00:29:20.563 rmmod nvme_keyring 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 414710 ']' 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 414710 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 414710 ']' 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 414710 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:20.563 09:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 414710 00:29:20.563 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:20.563 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:20.563 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 414710' 00:29:20.563 killing process with pid 414710 00:29:20.563 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 414710 00:29:20.563 [2024-05-16 09:42:14.019000] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:20.563 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 414710 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.823 09:42:14 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.823 09:42:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.126 09:42:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:24.126 09:42:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:24.126 00:29:24.126 real 0m54.244s 00:29:24.126 user 2m49.801s 00:29:24.126 sys 0m11.381s 00:29:24.126 09:42:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:24.126 09:42:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.126 ************************************ 00:29:24.126 END TEST nvmf_perf_adq 00:29:24.126 ************************************ 00:29:24.126 09:42:17 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:24.126 09:42:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:24.126 09:42:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:24.126 09:42:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.126 ************************************ 00:29:24.126 START TEST nvmf_shutdown 00:29:24.126 ************************************ 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:24.126 * Looking for test storage... 
00:29:24.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.126 09:42:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.127 ************************************ 00:29:24.127 START TEST nvmf_shutdown_tc1 00:29:24.127 ************************************ 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:29:24.127 09:42:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.127 09:42:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:30.713 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:30.714 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:30.714 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.714 09:42:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:30.714 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:30.714 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.714 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:30.975 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:31.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:29:31.236 00:29:31.236 --- 10.0.0.2 ping statistics --- 00:29:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.236 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:29:31.236 00:29:31.236 --- 10.0.0.1 ping statistics --- 00:29:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.236 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:31.236 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=421846 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 421846 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 421846 ']' 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:31.237 09:42:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:31.237 [2024-05-16 09:42:24.676514] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:29:31.237 [2024-05-16 09:42:24.676576] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.237 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.237 [2024-05-16 09:42:24.765555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.497 [2024-05-16 09:42:24.860401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.497 [2024-05-16 09:42:24.860456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.497 [2024-05-16 09:42:24.860465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.497 [2024-05-16 09:42:24.860471] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.497 [2024-05-16 09:42:24.860477] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.497 [2024-05-16 09:42:24.860619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.497 [2024-05-16 09:42:24.860788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.497 [2024-05-16 09:42:24.860955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.497 [2024-05-16 09:42:24.860957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.068 [2024-05-16 09:42:25.507540] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.068 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.069 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.069 Malloc1 00:29:32.069 [2024-05-16 09:42:25.610884] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:32.069 [2024-05-16 09:42:25.611112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.330 Malloc2 00:29:32.330 Malloc3 00:29:32.330 Malloc4 00:29:32.330 Malloc5 00:29:32.330 Malloc6 00:29:32.330 Malloc7 00:29:32.330 Malloc8 00:29:32.591 Malloc9 00:29:32.591 Malloc10 00:29:32.591 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.591 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:32.591 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.591 09:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.591 09:42:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=422222 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 422222 /var/tmp/bdevperf.sock 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 422222 ']' 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.591 { 00:29:32.591 "params": { 00:29:32.591 "name": "Nvme$subsystem", 00:29:32.591 "trtype": "$TEST_TRANSPORT", 00:29:32.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.591 "adrfam": "ipv4", 00:29:32.591 "trsvcid": "$NVMF_PORT", 00:29:32.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.591 "hdgst": ${hdgst:-false}, 00:29:32.591 "ddgst": ${ddgst:-false} 00:29:32.591 }, 00:29:32.591 "method": "bdev_nvme_attach_controller" 00:29:32.591 } 00:29:32.591 EOF 00:29:32.591 )") 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.591 { 00:29:32.591 "params": { 00:29:32.591 "name": "Nvme$subsystem", 00:29:32.591 "trtype": "$TEST_TRANSPORT", 00:29:32.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.591 "adrfam": "ipv4", 00:29:32.591 "trsvcid": "$NVMF_PORT", 00:29:32.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.591 "hdgst": ${hdgst:-false}, 00:29:32.591 "ddgst": ${ddgst:-false} 00:29:32.591 }, 00:29:32.591 "method": "bdev_nvme_attach_controller" 00:29:32.591 } 00:29:32.591 EOF 00:29:32.591 )") 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.591 { 00:29:32.591 "params": { 00:29:32.591 "name": "Nvme$subsystem", 00:29:32.591 "trtype": "$TEST_TRANSPORT", 00:29:32.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.591 "adrfam": "ipv4", 00:29:32.591 "trsvcid": "$NVMF_PORT", 00:29:32.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.591 "hdgst": ${hdgst:-false}, 00:29:32.591 "ddgst": ${ddgst:-false} 00:29:32.591 }, 00:29:32.591 "method": "bdev_nvme_attach_controller" 00:29:32.591 } 00:29:32.591 EOF 00:29:32.591 )") 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.591 { 00:29:32.591 "params": { 00:29:32.591 "name": "Nvme$subsystem", 00:29:32.591 "trtype": "$TEST_TRANSPORT", 00:29:32.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.591 "adrfam": "ipv4", 00:29:32.591 "trsvcid": "$NVMF_PORT", 00:29:32.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.591 "hdgst": ${hdgst:-false}, 00:29:32.591 "ddgst": ${ddgst:-false} 00:29:32.591 }, 00:29:32.591 "method": "bdev_nvme_attach_controller" 00:29:32.591 } 00:29:32.591 EOF 00:29:32.591 )") 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.591 { 00:29:32.591 "params": { 00:29:32.591 "name": "Nvme$subsystem", 00:29:32.591 "trtype": "$TEST_TRANSPORT", 00:29:32.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.591 "adrfam": "ipv4", 00:29:32.591 "trsvcid": "$NVMF_PORT", 00:29:32.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.591 "hdgst": ${hdgst:-false}, 00:29:32.591 "ddgst": ${ddgst:-false} 00:29:32.591 }, 00:29:32.591 "method": "bdev_nvme_attach_controller" 00:29:32.591 } 00:29:32.591 EOF 00:29:32.591 )") 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.591 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.592 { 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme$subsystem", 00:29:32.592 "trtype": "$TEST_TRANSPORT", 00:29:32.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "$NVMF_PORT", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.592 "hdgst": ${hdgst:-false}, 00:29:32.592 "ddgst": ${ddgst:-false} 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 } 00:29:32.592 EOF 00:29:32.592 )") 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.592 [2024-05-16 09:42:26.057516] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:29:32.592 [2024-05-16 09:42:26.057568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.592 { 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme$subsystem", 00:29:32.592 "trtype": "$TEST_TRANSPORT", 00:29:32.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "$NVMF_PORT", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.592 "hdgst": ${hdgst:-false}, 00:29:32.592 "ddgst": ${ddgst:-false} 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 } 00:29:32.592 EOF 00:29:32.592 )") 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.592 { 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme$subsystem", 00:29:32.592 "trtype": "$TEST_TRANSPORT", 00:29:32.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "$NVMF_PORT", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.592 "hdgst": ${hdgst:-false}, 00:29:32.592 "ddgst": ${ddgst:-false} 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 } 00:29:32.592 EOF 00:29:32.592 )") 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.592 { 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme$subsystem", 00:29:32.592 "trtype": "$TEST_TRANSPORT", 00:29:32.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "$NVMF_PORT", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.592 "hdgst": ${hdgst:-false}, 00:29:32.592 "ddgst": ${ddgst:-false} 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 } 00:29:32.592 EOF 00:29:32.592 )") 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.592 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:32.592 { 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme$subsystem", 00:29:32.592 "trtype": "$TEST_TRANSPORT", 00:29:32.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "$NVMF_PORT", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:29:32.592 "hdgst": ${hdgst:-false}, 00:29:32.592 "ddgst": ${ddgst:-false} 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 } 00:29:32.592 EOF 00:29:32.592 )") 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:32.592 09:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme1", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme2", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme3", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme4", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme5", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme6", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 "method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme7", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.592 
"method": "bdev_nvme_attach_controller" 00:29:32.592 },{ 00:29:32.592 "params": { 00:29:32.592 "name": "Nvme8", 00:29:32.592 "trtype": "tcp", 00:29:32.592 "traddr": "10.0.0.2", 00:29:32.592 "adrfam": "ipv4", 00:29:32.592 "trsvcid": "4420", 00:29:32.592 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:32.592 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:32.592 "hdgst": false, 00:29:32.592 "ddgst": false 00:29:32.592 }, 00:29:32.593 "method": "bdev_nvme_attach_controller" 00:29:32.593 },{ 00:29:32.593 "params": { 00:29:32.593 "name": "Nvme9", 00:29:32.593 "trtype": "tcp", 00:29:32.593 "traddr": "10.0.0.2", 00:29:32.593 "adrfam": "ipv4", 00:29:32.593 "trsvcid": "4420", 00:29:32.593 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:32.593 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:32.593 "hdgst": false, 00:29:32.593 "ddgst": false 00:29:32.593 }, 00:29:32.593 "method": "bdev_nvme_attach_controller" 00:29:32.593 },{ 00:29:32.593 "params": { 00:29:32.593 "name": "Nvme10", 00:29:32.593 "trtype": "tcp", 00:29:32.593 "traddr": "10.0.0.2", 00:29:32.593 "adrfam": "ipv4", 00:29:32.593 "trsvcid": "4420", 00:29:32.593 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:32.593 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:32.593 "hdgst": false, 00:29:32.593 "ddgst": false 00:29:32.593 }, 00:29:32.593 "method": "bdev_nvme_attach_controller" 00:29:32.593 }' 00:29:32.593 [2024-05-16 09:42:26.117569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.853 [2024-05-16 09:42:26.182575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 422222 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:34.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 422222 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:34.232 09:42:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:35.173 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 421846 00:29:35.173 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:35.173 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:35.173 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:35.173 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.173 09:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.173 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.173 { 00:29:35.173 "params": { 00:29:35.173 "name": "Nvme$subsystem", 00:29:35.173 "trtype": "$TEST_TRANSPORT", 00:29:35.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.173 "adrfam": "ipv4", 00:29:35.173 "trsvcid": "$NVMF_PORT", 00:29:35.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 [2024-05-16 09:42:28.661872] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:29:35.174 [2024-05-16 09:42:28.661920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422620 ] 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.174 { 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme$subsystem", 00:29:35.174 "trtype": "$TEST_TRANSPORT", 00:29:35.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "$NVMF_PORT", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.174 "hdgst": ${hdgst:-false}, 
00:29:35.174 "ddgst": ${ddgst:-false} 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 } 00:29:35.174 EOF 00:29:35.174 )") 00:29:35.174 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:35.174 09:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme1", 00:29:35.174 "trtype": "tcp", 00:29:35.174 "traddr": "10.0.0.2", 00:29:35.174 "adrfam": "ipv4", 00:29:35.174 "trsvcid": "4420", 00:29:35.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.174 "hdgst": false, 00:29:35.174 "ddgst": false 00:29:35.174 }, 00:29:35.174 "method": "bdev_nvme_attach_controller" 00:29:35.174 },{ 00:29:35.174 "params": { 00:29:35.174 "name": "Nvme2", 00:29:35.174 "trtype": "tcp", 00:29:35.174 "traddr": "10.0.0.2", 00:29:35.174 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme3", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme4", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme5", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme6", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme7", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 
00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme8", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme9", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 },{ 00:29:35.175 "params": { 00:29:35.175 "name": "Nvme10", 00:29:35.175 "trtype": "tcp", 00:29:35.175 "traddr": "10.0.0.2", 00:29:35.175 "adrfam": "ipv4", 00:29:35.175 "trsvcid": "4420", 00:29:35.175 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:35.175 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:35.175 "hdgst": false, 00:29:35.175 "ddgst": false 00:29:35.175 }, 00:29:35.175 "method": "bdev_nvme_attach_controller" 00:29:35.175 }' 00:29:35.175 [2024-05-16 09:42:28.721944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.435 [2024-05-16 09:42:28.786023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.818 Running I/O for 1 seconds... 00:29:38.204 00:29:38.204 Latency(us) 00:29:38.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.204 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme1n1 : 1.05 182.15 11.38 0.00 0.00 347663.93 23483.73 277872.64 00:29:38.204 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme2n1 : 1.11 229.62 14.35 0.00 0.00 270904.43 13489.49 267386.88 00:29:38.204 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme3n1 : 1.20 266.37 16.65 0.00 0.00 229750.19 12615.68 251658.24 00:29:38.204 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme4n1 : 1.08 236.54 14.78 0.00 0.00 253153.49 21408.43 248162.99 00:29:38.204 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme5n1 : 1.15 226.60 14.16 0.00 0.00 260530.39 7372.80 248162.99 00:29:38.204 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme6n1 : 1.21 264.29 16.52 0.00 0.00 220142.42 20316.16 246415.36 00:29:38.204 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme7n1 : 1.21 264.57 16.54 0.00 0.00 215901.35 14308.69 241172.48 00:29:38.204 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme8n1 : 1.22 263.07 16.44 0.00 0.00 213632.00 
9120.43 256901.12 00:29:38.204 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme9n1 : 1.22 261.67 16.35 0.00 0.00 211146.92 14854.83 249910.61 00:29:38.204 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.204 Verification LBA range: start 0x0 length 0x400 00:29:38.204 Nvme10n1 : 1.23 260.72 16.29 0.00 0.00 208100.01 14090.24 253405.87 00:29:38.204 =================================================================================================================== 00:29:38.204 Total : 2455.61 153.48 0.00 0.00 237248.10 7372.80 277872.64 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:38.204 rmmod nvme_tcp 00:29:38.204 rmmod nvme_fabrics 00:29:38.204 rmmod nvme_keyring 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 421846 ']' 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 421846 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 421846 ']' 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 421846 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 421846 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 421846' 00:29:38.204 killing process with pid 421846 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 421846 00:29:38.204 [2024-05-16 09:42:31.608206] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:38.204 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 421846 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:38.466 09:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.383 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:40.383 00:29:40.383 real 0m16.403s 00:29:40.383 user 0m34.102s 00:29:40.383 sys 0m6.311s 00:29:40.383 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:40.383 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:40.383 ************************************ 00:29:40.383 END TEST nvmf_shutdown_tc1 00:29:40.383 ************************************ 00:29:40.646 09:42:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:40.646 09:42:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:40.647 ************************************ 00:29:40.647 START TEST nvmf_shutdown_tc2 00:29:40.647 ************************************ 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.647 09:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:40.647 09:42:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:40.647 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:40.647 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.647 09:42:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:40.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.647 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:40.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:40.648 09:42:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:40.648 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:40.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:29:40.909 00:29:40.909 --- 10.0.0.2 ping statistics --- 00:29:40.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.909 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:29:40.909 00:29:40.909 --- 10.0.0.1 ping statistics --- 00:29:40.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.909 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=423942 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 423942 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 423942 ']' 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:40.909 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.910 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:40.910 09:42:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.910 [2024-05-16 09:42:34.415801] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:29:40.910 [2024-05-16 09:42:34.415858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.910 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.171 [2024-05-16 09:42:34.500443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.171 [2024-05-16 09:42:34.556464] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.171 [2024-05-16 09:42:34.556492] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.171 [2024-05-16 09:42:34.556497] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.171 [2024-05-16 09:42:34.556502] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.171 [2024-05-16 09:42:34.556506] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.171 [2024-05-16 09:42:34.556613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.171 [2024-05-16 09:42:34.556777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.171 [2024-05-16 09:42:34.556932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.171 [2024-05-16 09:42:34.556934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:41.743 [2024-05-16 09:42:35.241386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:41.743 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:42.004 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:42.004 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.004 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.004 Malloc1 00:29:42.004 [2024-05-16 09:42:35.339921] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:42.004 [2024-05-16 09:42:35.340108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.004 Malloc2 00:29:42.004 Malloc3 00:29:42.004 Malloc4 00:29:42.004 Malloc5 00:29:42.004 Malloc6 00:29:42.004 Malloc7 00:29:42.265 Malloc8 00:29:42.265 Malloc9 00:29:42.265 Malloc10 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.266 09:42:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=424156 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 424156 /var/tmp/bdevperf.sock 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 424156 ']' 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:29:42.266 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.266 { 00:29:42.266 "params": { 00:29:42.266 "name": "Nvme$subsystem", 00:29:42.266 "trtype": "$TEST_TRANSPORT", 00:29:42.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.266 "adrfam": "ipv4", 00:29:42.266 "trsvcid": "$NVMF_PORT", 00:29:42.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.266 "hdgst": ${hdgst:-false}, 00:29:42.266 "ddgst": ${ddgst:-false} 00:29:42.266 }, 00:29:42.266 "method": "bdev_nvme_attach_controller" 00:29:42.266 } 00:29:42.266 EOF 00:29:42.266 )") 00:29:42.267 [2024-05-16 09:42:35.778598] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:42.267 [2024-05-16 09:42:35.778651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424156 ] 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.267 { 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme$subsystem", 00:29:42.267 "trtype": "$TEST_TRANSPORT", 00:29:42.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "$NVMF_PORT", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.267 "hdgst": ${hdgst:-false}, 00:29:42.267 "ddgst": ${ddgst:-false} 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 } 00:29:42.267 EOF 00:29:42.267 )") 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.267 { 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme$subsystem", 00:29:42.267 "trtype": "$TEST_TRANSPORT", 00:29:42.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "$NVMF_PORT", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.267 "hdgst": ${hdgst:-false}, 00:29:42.267 "ddgst": ${ddgst:-false} 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 } 00:29:42.267 EOF 00:29:42.267 )") 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.267 { 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme$subsystem", 00:29:42.267 "trtype": "$TEST_TRANSPORT", 00:29:42.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "$NVMF_PORT", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.267 
"hdgst": ${hdgst:-false}, 00:29:42.267 "ddgst": ${ddgst:-false} 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 } 00:29:42.267 EOF 00:29:42.267 )") 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:42.267 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:42.267 09:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme1", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme2", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme3", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme4", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme5", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme6", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme7", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:42.267 "hdgst": false, 
00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme8", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.267 "name": "Nvme9", 00:29:42.267 "trtype": "tcp", 00:29:42.267 "traddr": "10.0.0.2", 00:29:42.267 "adrfam": "ipv4", 00:29:42.267 "trsvcid": "4420", 00:29:42.267 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:42.267 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:42.267 "hdgst": false, 00:29:42.267 "ddgst": false 00:29:42.267 }, 00:29:42.267 "method": "bdev_nvme_attach_controller" 00:29:42.267 },{ 00:29:42.267 "params": { 00:29:42.268 "name": "Nvme10", 00:29:42.268 "trtype": "tcp", 00:29:42.268 "traddr": "10.0.0.2", 00:29:42.268 "adrfam": "ipv4", 00:29:42.268 "trsvcid": "4420", 00:29:42.268 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:42.268 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:42.268 "hdgst": false, 00:29:42.268 "ddgst": false 00:29:42.268 }, 00:29:42.268 "method": "bdev_nvme_attach_controller" 00:29:42.268 }' 00:29:42.529 [2024-05-16 09:42:35.838077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.529 [2024-05-16 09:42:35.903098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.912 Running I/O for 10 seconds... 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:43.912 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:44.172 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:44.172 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:44.172 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:44.172 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.172 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.172 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.432 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.432 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:44.432 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:44.432 09:42:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 424156 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 424156 ']' 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 424156 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:29:44.693 09:42:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 424156 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 424156' 00:29:44.693 killing process with pid 424156 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 424156 00:29:44.693 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 424156 00:29:44.693 Received shutdown signal, test time was about 0.991103 seconds 00:29:44.693 00:29:44.693 Latency(us) 00:29:44.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme1n1 : 0.96 200.97 12.56 0.00 0.00 314855.82 18350.08 248162.99 00:29:44.693 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme2n1 : 0.99 259.35 16.21 0.00 0.00 238549.33 15073.28 234181.97 00:29:44.693 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme3n1 : 0.98 260.68 16.29 0.00 0.00 233249.71 20425.39 242920.11 00:29:44.693 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme4n1 : 0.98 261.53 16.35 0.00 0.00 227676.80 20971.52 241172.48 00:29:44.693 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme5n1 : 0.98 260.01 16.25 0.00 0.00 223984.43 19770.03 286610.77 00:29:44.693 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme6n1 : 0.97 267.29 16.71 0.00 0.00 212542.70 4450.99 244667.73 00:29:44.693 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme7n1 : 0.98 262.33 16.40 0.00 0.00 212632.75 19770.03 221074.77 00:29:44.693 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme8n1 : 0.99 258.53 16.16 0.00 0.00 211522.56 15947.09 260396.37 00:29:44.693 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme9n1 : 0.96 199.55 12.47 0.00 0.00 266371.13 18786.99 248162.99 00:29:44.693 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:44.693 Verification LBA range: start 0x0 length 0x400 00:29:44.693 Nvme10n1 : 0.97 198.05 12.38 0.00 0.00 262140.02 14417.92 263891.63 00:29:44.693 =================================================================================================================== 00:29:44.693 Total : 2428.29 151.77 0.00 0.00 237005.54 
4450.99 286610.77 00:29:44.976 09:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 423942 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:45.918 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:45.919 rmmod nvme_tcp 00:29:45.919 rmmod nvme_fabrics 00:29:45.919 rmmod nvme_keyring 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 423942 ']' 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 423942 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 423942 ']' 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 423942 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:45.919 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 423942 00:29:46.179 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:46.179 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:46.179 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 423942' 00:29:46.179 killing process with pid 423942 00:29:46.179 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 423942 00:29:46.179 [2024-05-16 09:42:39.514173] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
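For reference, the waitforio step recorded above (target/shutdown.sh polling bdevperf until reads accumulate, then killing bdevperf pid 424156 and finally the target pid 423942) can be reproduced stand-alone. This is only a sketch, assuming SPDK's scripts/rpc.py is on PATH and bdevperf is serving RPCs on /var/tmp/bdevperf.sock with a bdev named Nvme1n1:

    # Poll bdevperf over its RPC socket until Nvme1n1 has completed >= 100 reads,
    # mirroring the read_io_count=3 -> 67 -> 131 progression seen in the log above.
    for attempt in $(seq 1 10); do
        reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "${reads:-0}" -ge 100 ] && break
        sleep 0.25
    done

Only after the read counter crosses that threshold does the script tear bdevperf down, which is why the shutdown signal above arrives about one second into the 10-second verify run.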
00:29:46.179 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 423942 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.441 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.442 09:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:48.356 00:29:48.356 real 0m7.817s 00:29:48.356 user 0m23.507s 00:29:48.356 sys 0m1.221s 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.356 ************************************ 00:29:48.356 END TEST nvmf_shutdown_tc2 00:29:48.356 ************************************ 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.356 ************************************ 00:29:48.356 START TEST nvmf_shutdown_tc3 00:29:48.356 ************************************ 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != 
virt ]] 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.356 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.357 09:42:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:48.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:48.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.357 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:29:48.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:48.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.617 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.618 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.618 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.618 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.618 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.618 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.618 09:42:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.618 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:29:48.618 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.618 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.618 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.618 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.887 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.887 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:48.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:29:48.887 00:29:48.887 --- 10.0.0.2 ping statistics --- 00:29:48.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.887 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:29:48.887 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:29:48.887 00:29:48.888 --- 10.0.0.1 ping statistics --- 00:29:48.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.888 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=425563 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
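Editor's note: the nvmf_tcp_init sequence traced above (common.sh@229-268) builds the test topology from exactly the commands shown below, with cvl_0_0 as the target port moved into a network namespace and cvl_0_1 as the initiator port left in the root namespace, so the two ports of the same host talk over TCP instead of being short-circuited by local routing.

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # 10.0.0.1 = initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # 10.0.0.2 = target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the initiator port
ping -c 1 10.0.0.2                                 # root namespace reaches the target...
ip netns exec "$NS" ping -c 1 10.0.0.1             # ...and the namespace reaches the initiator

Both pings succeed above (0.712 ms and 0.271 ms), after which the target application is launched inside the namespace via NVMF_TARGET_NS_CMD.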
nvmf/common.sh@482 -- # waitforlisten 425563 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 425563 ']' 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:48.888 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.888 [2024-05-16 09:42:42.283373] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:48.888 [2024-05-16 09:42:42.283420] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.888 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.888 [2024-05-16 09:42:42.360068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.888 [2024-05-16 09:42:42.416198] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.888 [2024-05-16 09:42:42.416227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.888 [2024-05-16 09:42:42.416233] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.888 [2024-05-16 09:42:42.416237] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.888 [2024-05-16 09:42:42.416241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
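Editor's note: nvmf_tgt is started with -m 0x1E, i.e. a core mask of binary 11110. A quick way to see which reactors that mask should produce (cores 1 through 4, matching the reactor notices that follow):

mask=0x1E
for core in $(seq 0 31); do
    if (( (mask >> core) & 1 )); then
        echo "core $core enabled by mask $mask"
    fi
done
# prints cores 1, 2, 3 and 4; core 0 is left free for the bdevperf initiator, which is
# started later with -c 0x1 and reports "Reactor started on core 0"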
00:29:48.888 [2024-05-16 09:42:42.416337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.888 [2024-05-16 09:42:42.416493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.888 [2024-05-16 09:42:42.416621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.888 [2024-05-16 09:42:42.416624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.150 [2024-05-16 09:42:42.539524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.150 09:42:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.150 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.151 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.151 Malloc1 00:29:49.151 [2024-05-16 09:42:42.638192] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:49.151 [2024-05-16 09:42:42.638399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.151 Malloc2 00:29:49.151 Malloc3 00:29:49.411 Malloc4 00:29:49.411 Malloc5 00:29:49.411 Malloc6 00:29:49.411 Malloc7 00:29:49.411 Malloc8 00:29:49.411 Malloc9 00:29:49.411 Malloc10 00:29:49.674 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.674 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:49.674 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.674 09:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=425848 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 425848 /var/tmp/bdevperf.sock 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 425848 ']' 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:49.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
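Editor's note: the shutdown.sh@26-35 loop above appends one configuration block per subsystem (1 through 10) to rpcs.txt and then replays the accumulated file through rpc_cmd. The trace only shows the bare `cat` calls, so the payload below is a hypothetical illustration of what each iteration produces (one Malloc bdev, one subsystem, the shared TCP listener), consistent with the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener reported above, not the script's literal text.

num_subsystems=({1..10})
rpc_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
rm -rf "$rpc_file"
for i in "${num_subsystems[@]}"; do
    cat >> "$rpc_file" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$rpc_file"   # assumption: shutdown.sh@35 replays the file through rpc_cmd; the exact invocation is not shown in this excerpt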
00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.674 { 00:29:49.674 "params": { 00:29:49.674 "name": "Nvme$subsystem", 00:29:49.674 "trtype": "$TEST_TRANSPORT", 00:29:49.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.674 "adrfam": "ipv4", 00:29:49.674 "trsvcid": "$NVMF_PORT", 00:29:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.674 "hdgst": ${hdgst:-false}, 00:29:49.674 "ddgst": ${ddgst:-false} 00:29:49.674 }, 00:29:49.674 "method": "bdev_nvme_attach_controller" 00:29:49.674 } 00:29:49.674 EOF 00:29:49.674 )") 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.674 { 00:29:49.674 "params": { 00:29:49.674 "name": "Nvme$subsystem", 00:29:49.674 "trtype": "$TEST_TRANSPORT", 00:29:49.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.674 "adrfam": "ipv4", 00:29:49.674 "trsvcid": "$NVMF_PORT", 00:29:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.674 "hdgst": ${hdgst:-false}, 00:29:49.674 "ddgst": ${ddgst:-false} 00:29:49.674 }, 00:29:49.674 "method": "bdev_nvme_attach_controller" 00:29:49.674 } 00:29:49.674 EOF 00:29:49.674 )") 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.674 { 00:29:49.674 "params": { 00:29:49.674 "name": "Nvme$subsystem", 00:29:49.674 "trtype": "$TEST_TRANSPORT", 00:29:49.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.674 "adrfam": "ipv4", 00:29:49.674 "trsvcid": "$NVMF_PORT", 00:29:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.674 "hdgst": ${hdgst:-false}, 00:29:49.674 "ddgst": ${ddgst:-false} 00:29:49.674 }, 00:29:49.674 "method": "bdev_nvme_attach_controller" 00:29:49.674 } 00:29:49.674 EOF 00:29:49.674 )") 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.674 { 00:29:49.674 "params": { 00:29:49.674 "name": "Nvme$subsystem", 00:29:49.674 "trtype": "$TEST_TRANSPORT", 00:29:49.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.674 "adrfam": "ipv4", 00:29:49.674 "trsvcid": "$NVMF_PORT", 00:29:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.674 "hdgst": ${hdgst:-false}, 00:29:49.674 "ddgst": ${ddgst:-false} 00:29:49.674 }, 00:29:49.674 "method": "bdev_nvme_attach_controller" 00:29:49.674 } 00:29:49.674 EOF 00:29:49.674 )") 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.674 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.674 { 00:29:49.674 "params": { 00:29:49.674 "name": "Nvme$subsystem", 00:29:49.674 "trtype": "$TEST_TRANSPORT", 00:29:49.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.674 "adrfam": "ipv4", 00:29:49.674 "trsvcid": "$NVMF_PORT", 00:29:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.675 "hdgst": ${hdgst:-false}, 00:29:49.675 "ddgst": ${ddgst:-false} 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 } 00:29:49.675 EOF 00:29:49.675 )") 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.675 { 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme$subsystem", 00:29:49.675 "trtype": "$TEST_TRANSPORT", 00:29:49.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "$NVMF_PORT", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.675 "hdgst": ${hdgst:-false}, 00:29:49.675 "ddgst": ${ddgst:-false} 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 } 00:29:49.675 EOF 00:29:49.675 )") 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.675 { 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme$subsystem", 00:29:49.675 "trtype": "$TEST_TRANSPORT", 00:29:49.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "$NVMF_PORT", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.675 "hdgst": ${hdgst:-false}, 00:29:49.675 "ddgst": ${ddgst:-false} 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 } 00:29:49.675 EOF 00:29:49.675 )") 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:29:49.675 [2024-05-16 09:42:43.093827] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:29:49.675 [2024-05-16 09:42:43.093882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425848 ] 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.675 { 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme$subsystem", 00:29:49.675 "trtype": "$TEST_TRANSPORT", 00:29:49.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "$NVMF_PORT", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.675 "hdgst": ${hdgst:-false}, 00:29:49.675 "ddgst": ${ddgst:-false} 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 } 00:29:49.675 EOF 00:29:49.675 )") 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.675 { 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme$subsystem", 00:29:49.675 "trtype": "$TEST_TRANSPORT", 00:29:49.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "$NVMF_PORT", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.675 "hdgst": ${hdgst:-false}, 00:29:49.675 "ddgst": ${ddgst:-false} 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 } 00:29:49.675 EOF 00:29:49.675 )") 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.675 { 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme$subsystem", 00:29:49.675 "trtype": "$TEST_TRANSPORT", 00:29:49.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "$NVMF_PORT", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.675 "hdgst": ${hdgst:-false}, 00:29:49.675 "ddgst": ${ddgst:-false} 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 } 00:29:49.675 EOF 00:29:49.675 )") 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:29:49.675 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:49.675 09:42:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme1", 00:29:49.675 "trtype": "tcp", 00:29:49.675 "traddr": "10.0.0.2", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "4420", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.675 "hdgst": false, 00:29:49.675 "ddgst": false 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 },{ 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme2", 00:29:49.675 "trtype": "tcp", 00:29:49.675 "traddr": "10.0.0.2", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "4420", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:49.675 "hdgst": false, 00:29:49.675 "ddgst": false 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 },{ 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme3", 00:29:49.675 "trtype": "tcp", 00:29:49.675 "traddr": "10.0.0.2", 00:29:49.675 "adrfam": "ipv4", 00:29:49.675 "trsvcid": "4420", 00:29:49.675 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:49.675 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:49.675 "hdgst": false, 00:29:49.675 "ddgst": false 00:29:49.675 }, 00:29:49.675 "method": "bdev_nvme_attach_controller" 00:29:49.675 },{ 00:29:49.675 "params": { 00:29:49.675 "name": "Nvme4", 00:29:49.675 "trtype": "tcp", 00:29:49.675 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:49.676 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 },{ 00:29:49.676 "params": { 00:29:49.676 "name": "Nvme5", 00:29:49.676 "trtype": "tcp", 00:29:49.676 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:49.676 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 },{ 00:29:49.676 "params": { 00:29:49.676 "name": "Nvme6", 00:29:49.676 "trtype": "tcp", 00:29:49.676 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:49.676 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 },{ 00:29:49.676 "params": { 00:29:49.676 "name": "Nvme7", 00:29:49.676 "trtype": "tcp", 00:29:49.676 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:49.676 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 },{ 00:29:49.676 "params": { 00:29:49.676 "name": "Nvme8", 00:29:49.676 "trtype": "tcp", 00:29:49.676 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:49.676 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 },{ 00:29:49.676 "params": { 00:29:49.676 "name": "Nvme9", 00:29:49.676 "trtype": "tcp", 00:29:49.676 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:49.676 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 },{ 00:29:49.676 "params": { 00:29:49.676 "name": "Nvme10", 00:29:49.676 "trtype": "tcp", 00:29:49.676 "traddr": "10.0.0.2", 00:29:49.676 "adrfam": "ipv4", 00:29:49.676 "trsvcid": "4420", 00:29:49.676 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:49.676 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:49.676 "hdgst": false, 00:29:49.676 "ddgst": false 00:29:49.676 }, 00:29:49.676 "method": "bdev_nvme_attach_controller" 00:29:49.676 }' 00:29:49.676 [2024-05-16 09:42:43.153092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.676 [2024-05-16 09:42:43.217644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.590 Running I/O for 10 seconds... 00:29:51.590 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:51.590 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:29:51.590 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:51.590 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.590 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.591 09:42:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:51.591 09:42:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:51.851 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 425563 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 425563 ']' 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 425563 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 
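Editor's note: the shutdown.sh@57-69 polling visible above (read_io_count going 3, then 67, then 131 before the 100-read threshold is met) corresponds to a loop along these lines; this is a sketch reconstructed from the traced line numbers, not copied from the script.

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do
        # rpc_cmd is the suite's rpc.py wrapper; ask bdevperf for the bdev's read count
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                          | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough I/O has completed; safe to start shutting the target down
            break
        fi
        sleep 0.25
    done
    return $ret
}

# used above as: waitforio /var/tmp/bdevperf.sock Nvme1n1, after which killprocess 425563
# tears down the nvmf target while bdevperf keeps running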
-- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 425563 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 425563' 00:29:52.121 killing process with pid 425563 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 425563 00:29:52.121 [2024-05-16 09:42:45.632965] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:52.121 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 425563 00:29:52.121 [2024-05-16 09:42:45.637751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637837] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is 
same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.121 [2024-05-16 09:42:45.637897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637962] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637977] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.637995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the 
state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.638084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a550 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.122 [2024-05-16 09:42:45.641344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.123 [2024-05-16 09:42:45.641348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set 00:29:52.123 [2024-05-16 
09:42:45.641353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208fc0 is same with the state(5) to be set
00:29:52.123 [2024-05-16 09:42:45.642506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106a9f0 is same with the state(5) to be set
00:29:52.124 [2024-05-16 09:42:45.644149] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106ae90 is same with the state(5) to be set
00:29:52.124 [2024-05-16 09:42:45.645343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b330 is same with the state(5) to be set
00:29:52.125 [2024-05-16 09:42:45.646138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x106b7d0 is same with the state(5) to be set
00:29:52.126 [2024-05-16 09:42:45.647008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1207d00 is same with the state(5) to be set
00:29:52.127 [2024-05-16 09:42:45.648029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set
00:29:52.127 [2024-05-16 09:42:45.654958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:52.128 [2024-05-16 09:42:45.654994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.128 [2024-05-16 09:42:45.655005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:52.128 [2024-05-16 09:42:45.655013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.128 [2024-05-16 09:42:45.655022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:52.128 [2024-05-16 09:42:45.655029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.128 [2024-05-16 09:42:45.655037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:52.128 [2024-05-16 09:42:45.655045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.128 [2024-05-16 09:42:45.655058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa10610 is same with the state(5) to be set
00:29:52.128 [2024-05-16 09:42:45.655098] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d5a60 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.655190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf558b0 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.655274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbb6c0 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.655362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f9c0 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.655452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd90 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.655536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf36920 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.655618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.128 [2024-05-16 09:42:45.655680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:52.128 [2024-05-16 09:42:45.655687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b5e0 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.657716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.657738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.128 [2024-05-16 09:42:45.657745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657801] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.657848] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12081c0 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.658876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208680 is same with the state(5) to be set 00:29:52.129 [2024-05-16 09:42:45.659423] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659485] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659521] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659566] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659601] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659633] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659666] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.129 [2024-05-16 09:42:45.659775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.659985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.659996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.129 [2024-05-16 09:42:45.660236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.129 [2024-05-16 09:42:45.660246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:52.130 [2024-05-16 09:42:45.660280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 
[2024-05-16 09:42:45.660449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 
09:42:45.660637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.130 [2024-05-16 09:42:45.660828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.130 [2024-05-16 09:42:45.660838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.130 [2024-05-16 09:42:45.660841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.660844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.660854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.660864] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660870] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.660874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208b20 is same with the state(5) to be set 00:29:52.131 [2024-05-16 09:42:45.660878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.660889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.660896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.660906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.660913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.660923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.660930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.660983] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf06240 was disconnected and freed. reset controller. 00:29:52.131 [2024-05-16 09:42:45.661087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.131 [2024-05-16 09:42:45.661497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.131 [2024-05-16 09:42:45.661505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.132 [2024-05-16 09:42:45.661919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.661980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.661991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 
09:42:45.662097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.132 [2024-05-16 09:42:45.662140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.132 [2024-05-16 09:42:45.662148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1093d60 is same with the state(5) to be set 00:29:52.133 [2024-05-16 09:42:45.662272] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1093d60 was disconnected and freed. reset controller. 
00:29:52.133 [2024-05-16 09:42:45.662291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.662344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.662352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 
09:42:45.669601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669782] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.133 [2024-05-16 09:42:45.669984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.133 [2024-05-16 09:42:45.669993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.134 [2024-05-16 09:42:45.670549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.670558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf986a0 is same with the state(5) to be set 00:29:52.134 [2024-05-16 09:42:45.670633] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf986a0 was disconnected and freed. reset controller. 00:29:52.134 [2024-05-16 09:42:45.671975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:52.134 [2024-05-16 09:42:45.672040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4f6b0 (9): Bad file descriptor 00:29:52.134 [2024-05-16 09:42:45.672081] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10610 (9): Bad file descriptor 00:29:52.134 [2024-05-16 09:42:45.672116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.134 [2024-05-16 09:42:45.672127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.672136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.134 [2024-05-16 09:42:45.672144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.672152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.134 [2024-05-16 09:42:45.672159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.134 [2024-05-16 09:42:45.672168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.135 [2024-05-16 09:42:45.672176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.135 
[2024-05-16 09:42:45.672184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbbcf0 is same with the state(5) to be set 00:29:52.135 [2024-05-16 09:42:45.672205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5a60 (9): Bad file descriptor 00:29:52.135 [2024-05-16 09:42:45.672218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf558b0 (9): Bad file descriptor 00:29:52.135 [2024-05-16 09:42:45.672234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbb6c0 (9): Bad file descriptor 00:29:52.135 [2024-05-16 09:42:45.672253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4f9c0 (9): Bad file descriptor 00:29:52.135 [2024-05-16 09:42:45.672271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2cd90 (9): Bad file descriptor 00:29:52.135 [2024-05-16 09:42:45.672288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf36920 (9): Bad file descriptor 00:29:52.135 [2024-05-16 09:42:45.672304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0b5e0 (9): Bad file descriptor 00:29:52.404 [2024-05-16 09:42:45.674778] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:52.404 [2024-05-16 09:42:45.674862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:52.404 [2024-05-16 09:42:45.674877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:52.404 [2024-05-16 09:42:45.675610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.404 [2024-05-16 09:42:45.675934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.404 [2024-05-16 09:42:45.675946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4f6b0 with addr=10.0.0.2, port=4420 00:29:52.404 [2024-05-16 09:42:45.675954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f6b0 is same with the state(5) to be set 00:29:52.404 [2024-05-16 09:42:45.676285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.404 [2024-05-16 09:42:45.676655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.404 [2024-05-16 09:42:45.676669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d5a60 with addr=10.0.0.2, port=4420 00:29:52.404 [2024-05-16 09:42:45.676680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d5a60 is same with the state(5) to be set 00:29:52.404 [2024-05-16 09:42:45.677029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.404 [2024-05-16 09:42:45.677353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.404 [2024-05-16 09:42:45.677364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf36920 with addr=10.0.0.2, port=4420 00:29:52.404 [2024-05-16 09:42:45.677372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf36920 is same with the state(5) to be set 00:29:52.404 [2024-05-16 09:42:45.677972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.404 [2024-05-16 09:42:45.677988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 
09:42:45.678195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678376] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.404 [2024-05-16 09:42:45.678413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.404 [2024-05-16 09:42:45.678423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.678988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.678998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.679005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.679017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.679025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.679036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.679044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.405 [2024-05-16 09:42:45.679057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.405 [2024-05-16 09:42:45.679065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.679075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.679083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.679093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.679101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.679110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.679118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.679127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.679135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.679145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.679153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.679161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf075f0 is same with the state(5) to be set 00:29:52.406 [2024-05-16 09:42:45.679210] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf075f0 was disconnected and freed. reset controller. 00:29:52.406 [2024-05-16 09:42:45.679242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4f6b0 (9): Bad file descriptor 00:29:52.406 [2024-05-16 09:42:45.679253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5a60 (9): Bad file descriptor 00:29:52.406 [2024-05-16 09:42:45.679263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf36920 (9): Bad file descriptor 00:29:52.406 [2024-05-16 09:42:45.680536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:52.406 [2024-05-16 09:42:45.680559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbbcf0 (9): Bad file descriptor 00:29:52.406 [2024-05-16 09:42:45.680572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:52.406 [2024-05-16 09:42:45.680580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:52.406 [2024-05-16 09:42:45.680590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:52.406 [2024-05-16 09:42:45.680605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:52.406 [2024-05-16 09:42:45.680616] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:52.406 [2024-05-16 09:42:45.680625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:29:52.406 [2024-05-16 09:42:45.680639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:52.406 [2024-05-16 09:42:45.680647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:52.406 [2024-05-16 09:42:45.680655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:52.406 [2024-05-16 09:42:45.680719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.406 [2024-05-16 09:42:45.680729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.406 [2024-05-16 09:42:45.680736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.406 [2024-05-16 09:42:45.681234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.406 [2024-05-16 09:42:45.681444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.406 [2024-05-16 09:42:45.681454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbbcf0 with addr=10.0.0.2, port=4420 00:29:52.406 [2024-05-16 09:42:45.681462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbbcf0 is same with the state(5) to be set 00:29:52.406 [2024-05-16 09:42:45.681505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbbcf0 (9): Bad file descriptor 00:29:52.406 [2024-05-16 09:42:45.681545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:52.406 [2024-05-16 09:42:45.681553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:52.406 [2024-05-16 09:42:45.681560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:52.406 [2024-05-16 09:42:45.681598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
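Here the resets themselves start failing: posix_sock_create() reports "connect() failed, errno = 111", which on Linux is ECONNREFUSED and normally means nothing is accepting NVMe/TCP connections at 10.0.0.2:4420 at that moment; the subsequent flush attempts then hit already-closed sockets ("Bad file descriptor", EBADF), and bdev_nvme gives up with "Resetting controller failed." The snippet below is a minimal, illustrative reproduction of just the connect step, not SPDK's posix_sock_create(); the address and port are the ones used in this run.

/* Illustrative only (not SPDK's posix_sock_create()): a bare connect()
 * to the NVMe/TCP target address used in this run.  When no listener is
 * present on 10.0.0.2:4420, connect() fails with errno 111
 * (ECONNREFUSED), matching the log lines above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Port 4420 is the IANA-assigned NVMe/TCP port, which is why both the test target and these reconnect attempts use it.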
00:29:52.406 [2024-05-16 09:42:45.682108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 
09:42:45.682289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.406 [2024-05-16 09:42:45.682419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.406 [2024-05-16 09:42:45.682428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.682984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.682992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.683002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.683020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.683027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.683038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.683046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.407 [2024-05-16 09:42:45.683059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.407 [2024-05-16 09:42:45.683067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.683245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.683253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1092b40 is same with the state(5) to be set 00:29:52.408 [2024-05-16 09:42:45.684522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684645] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.408 [2024-05-16 09:42:45.684974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.408 [2024-05-16 09:42:45.684982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.684992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:52.409 [2024-05-16 09:42:45.685372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.409 [2024-05-16 09:42:45.685476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.409 [2024-05-16 09:42:45.685484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 
09:42:45.685549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.685681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.685690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf99b80 is same with the state(5) to be set 00:29:52.410 [2024-05-16 09:42:45.687000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687039] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.410 [2024-05-16 09:42:45.687421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.410 [2024-05-16 09:42:45.687431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.687989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.687997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.688007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.688024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.688033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.688043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.688050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.411 [2024-05-16 09:42:45.688066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.411 [2024-05-16 09:42:45.688074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.688083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.688091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.688100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.688108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.688117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.688125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.688135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.688143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.688153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.688161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.688169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9b030 is same with the state(5) to be set 00:29:52.412 [2024-05-16 09:42:45.689433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.412 [2024-05-16 09:42:45.689960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.412 [2024-05-16 09:42:45.689968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.689978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.689986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.689996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:52.413 [2024-05-16 09:42:45.690295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 
09:42:45.690473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.690578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.413 [2024-05-16 09:42:45.690586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf03880 is same with the state(5) to be set 00:29:52.413 [2024-05-16 09:42:45.691847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.413 [2024-05-16 09:42:45.691861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.691883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.691903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.691924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.691946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.691963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.691982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.691993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.414 [2024-05-16 09:42:45.692381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.414 [2024-05-16 09:42:45.692391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.692984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.692992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.693001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.693009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.415 [2024-05-16 09:42:45.693018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf04d60 is same with the state(5) to be set 00:29:52.415 [2024-05-16 09:42:45.694879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.415 [2024-05-16 09:42:45.694900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.694912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.694920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.694929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.694937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.694946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.694953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.694963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.694971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.694980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.694988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.694997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.695004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.695014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.695021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.416 [2024-05-16 09:42:45.695030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.416 [2024-05-16 09:42:45.695039] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.416 [2024-05-16 09:42:45.695059 through 09:42:45.696028] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, repeated for cid:9 through cid:63 (lba:17536 through lba:24448 in steps of 128); every completion reports ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (55 near-identical command/completion notice pairs condensed)
00:29:52.417 [2024-05-16 09:42:45.696036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108b4e0 is same with the state(5) to be set
00:29:52.417 [2024-05-16 09:42:45.697508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.417 [2024-05-16 09:42:45.697530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:29:52.417 [2024-05-16 09:42:45.697541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:52.417 [2024-05-16 09:42:45.697550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:52.417 [2024-05-16 09:42:45.697634] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:52.417 [2024-05-16 09:42:45.697648] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:52.417 [2024-05-16 09:42:45.697722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:52.417 task offset: 26752 on job bdev=Nvme8n1 fails
00:29:52.417 Latency(us)
00:29:52.417 All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400; each job ended in error after roughly the listed runtime
Job          runtime(s)     IOPS    MiB/s   Fail/s   TO/s     Average         min          max
Nvme1n1            0.96    133.04     8.31    66.52   0.00   317155.56    39758.51    260396.37
Nvme2n1            0.95    201.84    12.62    67.28   0.00   230171.73    16711.68    232434.35
Nvme3n1            0.95    201.59    12.60    67.20   0.00   225636.69    14636.37    251658.24
Nvme4n1            0.96    203.20    12.70    66.35   0.00   220402.19    19551.57    249910.61
Nvme5n1            0.97    132.36     8.27    66.18   0.00   292894.44    16930.13    255153.49
Nvme6n1            0.97    136.16     8.51    66.02   0.00   281355.08    18131.63    272629.76
Nvme7n1            0.97    197.55    12.35    65.85   0.00   211110.72    12506.45    251658.24
Nvme8n1            0.95    202.18    12.64    67.39   0.00   200686.83    11632.64    260396.37
Nvme9n1            0.96    200.37    12.52    66.79   0.00   197976.53    16930.13    244667.73
Nvme10n1           0.97    131.30     8.21    65.65   0.00   263218.63    21736.11    267386.88
===============================================================================================
Total                 -   1739.59   108.72   665.23   0.00   239146.57    11632.64    272629.76
00:29:52.418 [2024-05-16 09:42:45.723600] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd
on non-zero 00:29:52.418 [2024-05-16 09:42:45.723646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:52.418 [2024-05-16 09:42:45.723836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.724123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.724136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0b5e0 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.724146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b5e0 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.724494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.724800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.724811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf2cd90 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.724818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd90 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.725037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.725228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.725240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf558b0 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.725248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf558b0 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.725436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.725773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.725783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa10610 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.725796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa10610 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.727391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:52.418 [2024-05-16 09:42:45.727407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:52.418 [2024-05-16 09:42:45.727417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:52.418 [2024-05-16 09:42:45.727426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:52.418 [2024-05-16 09:42:45.727824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.728161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.728172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4f9c0 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.728179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f9c0 is same with the state(5) to be set 00:29:52.418 
[2024-05-16 09:42:45.728255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.728519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.728530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbb6c0 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.728537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbb6c0 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.728549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0b5e0 (9): Bad file descriptor 00:29:52.418 [2024-05-16 09:42:45.728561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2cd90 (9): Bad file descriptor 00:29:52.418 [2024-05-16 09:42:45.728571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf558b0 (9): Bad file descriptor 00:29:52.418 [2024-05-16 09:42:45.728581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa10610 (9): Bad file descriptor 00:29:52.418 [2024-05-16 09:42:45.728619] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.418 [2024-05-16 09:42:45.728631] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.418 [2024-05-16 09:42:45.728644] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:52.418 [2024-05-16 09:42:45.728656] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
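The connect() failures above all report errno = 111 (ECONNREFUSED): once the target application has stopped, nothing is listening on 10.0.0.2:4420, so every reconnect attempt from the bdev_nvme layer is refused and the qpair file descriptors go bad. A minimal sketch for confirming this by hand, assuming the cvl_0_0_ns_spdk namespace created by nvmftestinit is still present and a netcat with -z is installed; these commands are illustrative and are not part of shutdown.sh:
  # target side: is anything still listening on the NVMe/TCP port?
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' || echo "no listener on 4420"
  # initiator side: a refused TCP connect exits non-zero, matching errno 111
  nc -z -w 1 10.0.0.2 4420 || echo "connect to 10.0.0.2:4420 refused"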
00:29:52.418 [2024-05-16 09:42:45.728928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.729140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.729152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf36920 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.729160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf36920 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.729333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.729566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.729577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d5a60 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.729584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d5a60 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.729910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.730284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.730295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4f6b0 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.730308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4f6b0 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.730647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.730993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.418 [2024-05-16 09:42:45.731004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbbcf0 with addr=10.0.0.2, port=4420 00:29:52.418 [2024-05-16 09:42:45.731011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbbcf0 is same with the state(5) to be set 00:29:52.418 [2024-05-16 09:42:45.731021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4f9c0 (9): Bad file descriptor 00:29:52.418 [2024-05-16 09:42:45.731031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbb6c0 (9): Bad file descriptor 00:29:52.418 [2024-05-16 09:42:45.731041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.418 [2024-05-16 09:42:45.731048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.418 [2024-05-16 09:42:45.731061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.418 [2024-05-16 09:42:45.731073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:52.418 [2024-05-16 09:42:45.731080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:52.418 [2024-05-16 09:42:45.731087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:29:52.418 [2024-05-16 09:42:45.731098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:52.418 [2024-05-16 09:42:45.731105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:52.418 [2024-05-16 09:42:45.731112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:52.418 [2024-05-16 09:42:45.731123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:52.418 [2024-05-16 09:42:45.731130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:52.418 [2024-05-16 09:42:45.731137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731228] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf36920 (9): Bad file descriptor 00:29:52.419 [2024-05-16 09:42:45.731252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d5a60 (9): Bad file descriptor 00:29:52.419 [2024-05-16 09:42:45.731261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4f6b0 (9): Bad file descriptor 00:29:52.419 [2024-05-16 09:42:45.731271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbbcf0 (9): Bad file descriptor 00:29:52.419 [2024-05-16 09:42:45.731279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:52.419 [2024-05-16 09:42:45.731286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:52.419 [2024-05-16 09:42:45.731292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:52.419 [2024-05-16 09:42:45.731313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:52.419 [2024-05-16 09:42:45.731319] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.419 [2024-05-16 09:42:45.731362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:52.419 [2024-05-16 09:42:45.731369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:52.419 [2024-05-16 09:42:45.731376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:52.419 [2024-05-16 09:42:45.731391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:52.419 [2024-05-16 09:42:45.731399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:52.419 [2024-05-16 09:42:45.731414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:52.419 [2024-05-16 09:42:45.731421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:52.419 [2024-05-16 09:42:45.731438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:52.419 [2024-05-16 09:42:45.731445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:52.419 [2024-05-16 09:42:45.731474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:52.419 [2024-05-16 09:42:45.731495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
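At this point every controller (cnode1 through cnode10) has failed reset and reinitialization and sits in a failed state. While the bdevperf process is still alive, that state can be inspected over its JSON-RPC socket; a short sketch, assuming the /var/tmp/bdevperf.sock socket used by these host tests (the exact fields in the output depend on the SPDK version):
  # list the NVMe bdev controllers known to bdevperf and dump their status
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers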
00:29:52.419 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:52.419 09:42:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:53.363 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 425848 00:29:53.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (425848) - No such process 00:29:53.363 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:29:53.363 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:53.363 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:53.363 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:53.363 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:53.624 rmmod nvme_tcp 00:29:53.624 rmmod nvme_fabrics 00:29:53.624 rmmod nvme_keyring 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.624 09:42:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.537 09:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:55.537 00:29:55.537 real 0m7.168s 00:29:55.537 user 0m16.772s 00:29:55.537 sys 0m1.109s 00:29:55.537 
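The teardown traced above (stoptarget followed by nvmftestfini) reduces to removing the per-job state files, unloading the kernel NVMe/TCP modules and clearing the test network configuration. A condensed sketch of the equivalent manual cleanup, using the interface and namespace names from this run (cvl_0_1, cvl_0_0_ns_spdk); the namespace deletion is roughly what _remove_spdk_ns does:
  rm -f ./local-job0-0-verify.state              # per-job bdevperf state file
  modprobe -v -r nvme-tcp nvme-fabrics           # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring above
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1                       # drop the initiator-side test address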
09:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:55.537 09:42:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:55.537 ************************************ 00:29:55.537 END TEST nvmf_shutdown_tc3 00:29:55.537 ************************************ 00:29:55.797 09:42:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:55.797 00:29:55.797 real 0m31.768s 00:29:55.797 user 1m14.540s 00:29:55.797 sys 0m8.873s 00:29:55.797 09:42:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:55.797 09:42:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.797 ************************************ 00:29:55.797 END TEST nvmf_shutdown 00:29:55.797 ************************************ 00:29:55.797 09:42:49 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.797 09:42:49 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.797 09:42:49 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:55.797 09:42:49 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:55.797 09:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.797 ************************************ 00:29:55.797 START TEST nvmf_multicontroller 00:29:55.798 ************************************ 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:55.798 * Looking for test storage... 
00:29:55.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.798 09:42:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:56.059 09:42:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.059 09:42:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.646 09:42:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:02.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:02.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.646 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:02.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:02.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.647 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.908 09:42:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:02.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.780 ms 00:30:02.908 00:30:02.908 --- 10.0.0.2 ping statistics --- 00:30:02.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.908 rtt min/avg/max/mdev = 0.780/0.780/0.780/0.000 ms 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:02.908 00:30:02.908 --- 10.0.0.1 ping statistics --- 00:30:02.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.908 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=430676 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 430676 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 430676 ']' 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:02.908 09:42:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.908 [2024-05-16 09:42:56.441636] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:30:02.908 [2024-05-16 09:42:56.441698] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.169 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.169 [2024-05-16 09:42:56.531290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:03.169 [2024-05-16 09:42:56.623979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.169 [2024-05-16 09:42:56.624034] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.169 [2024-05-16 09:42:56.624042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.169 [2024-05-16 09:42:56.624049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.169 [2024-05-16 09:42:56.624063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:03.169 [2024-05-16 09:42:56.624186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.169 [2024-05-16 09:42:56.624483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.169 [2024-05-16 09:42:56.624484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:03.741 [2024-05-16 09:42:57.274642] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.741 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:03.741 09:42:57 
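The nvmfappstart step traced above amounts to launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket answers. A minimal standalone sketch of that sequence, using the flags and socket path from the trace; the relative paths and the rpc_get_methods polling loop are assumptions standing in for the harness's own waitforlisten helper:
  # start the NVMe-oF target in the namespace created by nvmf_tcp_init;
  # -m 0xE pins it to cores 1-3 and -e 0xFFFF enables the tracepoint groups noted above
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept configuration
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done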
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.003 Malloc0 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.003 [2024-05-16 09:42:57.339199] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:04.003 [2024-05-16 09:42:57.339414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.003 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.004 [2024-05-16 09:42:57.351338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.004 Malloc1 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.004 09:42:57 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=431027 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 431027 /var/tmp/bdevperf.sock 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 431027 ']' 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
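Collapsed out of the rpc_cmd trace above, the target-side configuration for this test reduces to a handful of RPCs. A sketch issued through scripts/rpc.py directly (rpc_cmd in the harness is a wrapper over the same RPC interface), with the addresses, ports and serial number exactly as logged:
  # transport, malloc bdev and first subsystem, as in the host/multicontroller.sh@27-@34 entries above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 with Malloc1 and serial SPDK00000000000002 is created the same way (@36-@41)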
00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:04.004 09:42:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 NVMe0n1 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.949 1 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 request: 00:30:04.949 { 00:30:04.949 "name": "NVMe0", 00:30:04.949 "trtype": "tcp", 00:30:04.949 "traddr": "10.0.0.2", 00:30:04.949 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:04.949 "hostaddr": "10.0.0.2", 00:30:04.949 "hostsvcid": "60000", 00:30:04.949 "adrfam": "ipv4", 00:30:04.949 "trsvcid": "4420", 00:30:04.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.949 "method": 
"bdev_nvme_attach_controller", 00:30:04.949 "req_id": 1 00:30:04.949 } 00:30:04.949 Got JSON-RPC error response 00:30:04.949 response: 00:30:04.949 { 00:30:04.949 "code": -114, 00:30:04.949 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:04.949 } 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.949 request: 00:30:04.949 { 00:30:04.949 "name": "NVMe0", 00:30:04.949 "trtype": "tcp", 00:30:04.949 "traddr": "10.0.0.2", 00:30:04.949 "hostaddr": "10.0.0.2", 00:30:04.949 "hostsvcid": "60000", 00:30:04.949 "adrfam": "ipv4", 00:30:04.949 "trsvcid": "4420", 00:30:04.949 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:04.949 "method": "bdev_nvme_attach_controller", 00:30:04.949 "req_id": 1 00:30:04.949 } 00:30:04.949 Got JSON-RPC error response 00:30:04.949 response: 00:30:04.949 { 00:30:04.949 "code": -114, 00:30:04.949 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:04.949 } 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:04.949 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.950 request: 00:30:04.950 { 00:30:04.950 "name": "NVMe0", 00:30:04.950 "trtype": "tcp", 00:30:04.950 "traddr": "10.0.0.2", 00:30:04.950 "hostaddr": "10.0.0.2", 00:30:04.950 "hostsvcid": "60000", 00:30:04.950 "adrfam": "ipv4", 00:30:04.950 "trsvcid": "4420", 00:30:04.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.950 "multipath": "disable", 00:30:04.950 "method": "bdev_nvme_attach_controller", 00:30:04.950 "req_id": 1 00:30:04.950 } 00:30:04.950 Got JSON-RPC error response 00:30:04.950 response: 00:30:04.950 { 00:30:04.950 "code": -114, 00:30:04.950 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:04.950 } 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.950 request: 00:30:04.950 { 00:30:04.950 "name": "NVMe0", 00:30:04.950 "trtype": "tcp", 00:30:04.950 "traddr": "10.0.0.2", 00:30:04.950 "hostaddr": "10.0.0.2", 00:30:04.950 "hostsvcid": "60000", 00:30:04.950 "adrfam": "ipv4", 00:30:04.950 "trsvcid": "4420", 00:30:04.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:04.950 "multipath": "failover", 00:30:04.950 "method": "bdev_nvme_attach_controller", 00:30:04.950 "req_id": 1 00:30:04.950 } 00:30:04.950 Got JSON-RPC error response 00:30:04.950 response: 00:30:04.950 { 00:30:04.950 "code": -114, 00:30:04.950 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:04.950 } 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.950 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.211 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.211 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.472 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:05.472 09:42:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:06.415 0 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 431027 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 431027 ']' 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 431027 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:06.415 09:42:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 431027 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 431027' 00:30:06.677 killing process with pid 431027 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 431027 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 431027 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:06.677 09:43:00 
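The host side of the run uses bdevperf's wait-for-RPC pattern: started with -z it creates no bdevs until configured over its own RPC socket, the NVMe controllers are attached through that socket, and bdevperf.py perform_tests starts the queued 1-second, queue-depth-128, 4 KiB write workload whose result table appears in the try.txt dump below. A sketch with the parameters from the trace (relative paths assume this job's SPDK tree):
  # bdevperf idles on /var/tmp/bdevperf.sock instead of reading a config file (-z)
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  bdevperf_pid=$!
  # attach the target subsystem as controller NVMe0, fixing the host address and service id (-i/-c)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # kick off the actual I/O run and wait for the result table
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests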
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:30:06.677 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:06.677 [2024-05-16 09:42:57.469907] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:30:06.677 [2024-05-16 09:42:57.469964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431027 ] 00:30:06.677 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.677 [2024-05-16 09:42:57.528552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.677 [2024-05-16 09:42:57.592826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.677 [2024-05-16 09:42:58.808505] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name ccac34b9-3085-4168-be61-51c9b4553ec4 already exists 00:30:06.677 [2024-05-16 09:42:58.808536] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:ccac34b9-3085-4168-be61-51c9b4553ec4 alias for bdev NVMe1n1 00:30:06.677 [2024-05-16 09:42:58.808547] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:06.677 Running I/O for 1 seconds... 
00:30:06.677 00:30:06.677 Latency(us) 00:30:06.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.677 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:06.677 NVMe0n1 : 1.00 27228.94 106.36 0.00 0.00 4689.56 2143.57 16165.55 00:30:06.677 =================================================================================================================== 00:30:06.677 Total : 27228.94 106.36 0.00 0.00 4689.56 2143.57 16165.55 00:30:06.677 Received shutdown signal, test time was about 1.000000 seconds 00:30:06.677 00:30:06.677 Latency(us) 00:30:06.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.677 =================================================================================================================== 00:30:06.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:06.677 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:06.677 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:06.677 rmmod nvme_tcp 00:30:06.677 rmmod nvme_fabrics 00:30:06.677 rmmod nvme_keyring 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 430676 ']' 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 430676 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 430676 ']' 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 430676 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 430676 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 430676' 00:30:06.938 killing process with pid 430676 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 430676 00:30:06.938 [2024-05-16 09:43:00.317237] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 430676 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:06.938 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:06.939 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:06.939 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:06.939 09:43:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.939 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.939 09:43:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.488 09:43:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.488 00:30:09.488 real 0m13.308s 00:30:09.488 user 0m16.532s 00:30:09.488 sys 0m5.953s 00:30:09.488 09:43:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:09.488 09:43:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:09.488 ************************************ 00:30:09.488 END TEST nvmf_multicontroller 00:30:09.488 ************************************ 00:30:09.488 09:43:02 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:09.488 09:43:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:09.488 09:43:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:09.488 09:43:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:09.488 ************************************ 00:30:09.488 START TEST nvmf_aer 00:30:09.488 ************************************ 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:09.488 * Looking for test storage... 
00:30:09.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.488 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.489 09:43:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:16.096 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:16.097 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:30:16.097 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:16.097 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:16.097 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.097 
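The device probe that nvmf_aer has just repeated maps each supported NIC PCI function to its kernel netdev through sysfs; the cvl_0_0/cvl_0_1 names are specific to this rig. A sketch of that lookup under the same assumptions (E810 port at 0000:4b:00.0, driver ice):
  # list the net devices exposed under one of the PCI functions found above
  pci=0000:4b:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      echo "Found net device under $pci: ${dev##*/}"
  done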
09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.097 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.359 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.359 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.359 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:30:16.359 00:30:16.359 --- 10.0.0.2 ping statistics --- 00:30:16.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.359 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:30:16.359 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:30:16.360 00:30:16.360 --- 10.0.0.1 ping statistics --- 00:30:16.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.360 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=435669 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 435669 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 435669 ']' 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:16.360 09:43:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:16.360 [2024-05-16 09:43:09.832180] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:30:16.360 [2024-05-16 09:43:09.832272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.360 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.360 [2024-05-16 09:43:09.902693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.621 [2024-05-16 09:43:09.977352] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.621 [2024-05-16 09:43:09.977390] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:16.621 [2024-05-16 09:43:09.977397] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.621 [2024-05-16 09:43:09.977404] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.621 [2024-05-16 09:43:09.977410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.621 [2024-05-16 09:43:09.977545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.621 [2024-05-16 09:43:09.977663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.621 [2024-05-16 09:43:09.977821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.621 [2024-05-16 09:43:09.977822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 [2024-05-16 09:43:10.650575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 Malloc0 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 [2024-05-16 09:43:10.707090] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:17.194 [2024-05-16 09:43:10.707318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.194 [ 00:30:17.194 { 00:30:17.194 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:17.194 "subtype": "Discovery", 00:30:17.194 "listen_addresses": [], 00:30:17.194 "allow_any_host": true, 00:30:17.194 "hosts": [] 00:30:17.194 }, 00:30:17.194 { 00:30:17.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.194 "subtype": "NVMe", 00:30:17.194 "listen_addresses": [ 00:30:17.194 { 00:30:17.194 "trtype": "TCP", 00:30:17.194 "adrfam": "IPv4", 00:30:17.194 "traddr": "10.0.0.2", 00:30:17.194 "trsvcid": "4420" 00:30:17.194 } 00:30:17.194 ], 00:30:17.194 "allow_any_host": true, 00:30:17.194 "hosts": [], 00:30:17.194 "serial_number": "SPDK00000000000001", 00:30:17.194 "model_number": "SPDK bdev Controller", 00:30:17.194 "max_namespaces": 2, 00:30:17.194 "min_cntlid": 1, 00:30:17.194 "max_cntlid": 65519, 00:30:17.194 "namespaces": [ 00:30:17.194 { 00:30:17.194 "nsid": 1, 00:30:17.194 "bdev_name": "Malloc0", 00:30:17.194 "name": "Malloc0", 00:30:17.194 "nguid": "A2CA87FFF4064B889C038833199B3F8C", 00:30:17.194 "uuid": "a2ca87ff-f406-4b88-9c03-8833199b3f8c" 00:30:17.194 } 00:30:17.194 ] 00:30:17.194 } 00:30:17.194 ] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=435734 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:30:17.194 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:17.456 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.456 Malloc1 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.456 09:43:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.456 Asynchronous Event Request test 00:30:17.456 Attaching to 10.0.0.2 00:30:17.456 Attached to 10.0.0.2 00:30:17.456 Registering asynchronous event callbacks... 00:30:17.456 Starting namespace attribute notice tests for all controllers... 00:30:17.456 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:17.456 aer_cb - Changed Namespace 00:30:17.456 Cleaning up... 00:30:17.456 [ 00:30:17.456 { 00:30:17.456 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:17.456 "subtype": "Discovery", 00:30:17.456 "listen_addresses": [], 00:30:17.456 "allow_any_host": true, 00:30:17.456 "hosts": [] 00:30:17.456 }, 00:30:17.456 { 00:30:17.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.456 "subtype": "NVMe", 00:30:17.456 "listen_addresses": [ 00:30:17.456 { 00:30:17.456 "trtype": "TCP", 00:30:17.456 "adrfam": "IPv4", 00:30:17.456 "traddr": "10.0.0.2", 00:30:17.456 "trsvcid": "4420" 00:30:17.456 } 00:30:17.456 ], 00:30:17.456 "allow_any_host": true, 00:30:17.456 "hosts": [], 00:30:17.456 "serial_number": "SPDK00000000000001", 00:30:17.456 "model_number": "SPDK bdev Controller", 00:30:17.456 "max_namespaces": 2, 00:30:17.456 "min_cntlid": 1, 00:30:17.456 "max_cntlid": 65519, 00:30:17.456 "namespaces": [ 00:30:17.456 { 00:30:17.456 "nsid": 1, 00:30:17.456 "bdev_name": "Malloc0", 00:30:17.456 "name": "Malloc0", 00:30:17.456 "nguid": "A2CA87FFF4064B889C038833199B3F8C", 00:30:17.456 "uuid": "a2ca87ff-f406-4b88-9c03-8833199b3f8c" 00:30:17.456 }, 00:30:17.456 { 00:30:17.456 "nsid": 2, 00:30:17.456 "bdev_name": "Malloc1", 00:30:17.456 "name": "Malloc1", 00:30:17.456 "nguid": "E0DAAF49423B4C12884437286F766E4E", 00:30:17.456 "uuid": "e0daaf49-423b-4c12-8844-37286f766e4e" 00:30:17.456 } 00:30:17.456 ] 00:30:17.456 } 00:30:17.456 ] 00:30:17.456 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.456 09:43:11 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 435734 00:30:17.456 09:43:11 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:17.456 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.456 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.717 09:43:11 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:17.717 rmmod nvme_tcp 00:30:17.717 rmmod nvme_fabrics 00:30:17.717 rmmod nvme_keyring 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 435669 ']' 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 435669 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 435669 ']' 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 435669 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 435669 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 435669' 00:30:17.717 killing process with pid 435669 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 435669 00:30:17.717 [2024-05-16 09:43:11.172177] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:17.717 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 435669 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.979 09:43:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.894 09:43:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:19.894 00:30:19.894 real 0m10.745s 00:30:19.894 user 0m7.389s 00:30:19.894 sys 0m5.589s 00:30:19.894 09:43:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:19.894 09:43:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:19.894 ************************************ 00:30:19.894 END TEST nvmf_aer 00:30:19.894 ************************************ 00:30:19.894 09:43:13 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:19.894 09:43:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:19.894 09:43:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:19.894 09:43:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:20.155 ************************************ 00:30:20.155 START TEST nvmf_async_init 00:30:20.155 ************************************ 00:30:20.155 09:43:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:20.155 * Looking for test storage... 00:30:20.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:20.155 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.155 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:20.155 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
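Stripped of timestamps and xtrace prefixes, the RPC sequence that host/aer.sh drove in the run above was the following (rpc_cmd is the suite's thin wrapper around scripts/rpc.py, talking to the target over /var/tmp/spdk.sock; every subcommand and flag below is copied from the trace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the aer example tool connects and registers its AEN callback, then a second
  # namespace is hot-added so the Changed Namespace notice can be observed
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  # teardown
  rpc_cmd bdev_malloc_delete Malloc0
  rpc_cmd bdev_malloc_delete Malloc1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1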
00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.156 
09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fd3d7d27c98742148644eca826145a2d 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:20.156 09:43:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:26.744 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:26.745 09:43:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:26.745 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:26.745 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.745 09:43:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:26.745 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:26.745 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.745 09:43:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.745 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:30:27.006 00:30:27.006 --- 10.0.0.2 ping statistics --- 00:30:27.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.006 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:27.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:30:27.006 00:30:27.006 --- 10.0.0.1 ping statistics --- 00:30:27.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.006 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=440044 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 440044 00:30:27.006 09:43:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:27.007 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 440044 ']' 00:30:27.007 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.007 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:27.007 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.007 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:27.007 09:43:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:27.267 [2024-05-16 09:43:20.598749] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:30:27.268 [2024-05-16 09:43:20.598809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.268 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.268 [2024-05-16 09:43:20.664853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.268 [2024-05-16 09:43:20.729576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.268 [2024-05-16 09:43:20.729610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
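Before this second target came up, nvmftestinit repeated the same physical-port plumbing used for the AER test: one of the two E810 (ice) ports found at 0000:4b:00.0/00.1 is moved into a private network namespace, and the two ends are addressed as 10.0.0.2 (target) and 10.0.0.1 (initiator). Condensed from the trace above, minus timestamps and xtrace prefixes, that setup amounts to:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check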
00:30:27.268 [2024-05-16 09:43:20.729618] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.268 [2024-05-16 09:43:20.729624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.268 [2024-05-16 09:43:20.729630] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.268 [2024-05-16 09:43:20.729654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.838 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:27.838 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:30:27.838 09:43:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:27.838 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:27.838 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.098 09:43:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.099 [2024-05-16 09:43:21.416037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.099 null0 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fd3d7d27c98742148644eca826145a2d 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.099 
09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.099 [2024-05-16 09:43:21.476138] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:28.099 [2024-05-16 09:43:21.476320] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.099 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 nvme0n1 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 [ 00:30:28.361 { 00:30:28.361 "name": "nvme0n1", 00:30:28.361 "aliases": [ 00:30:28.361 "fd3d7d27-c987-4214-8644-eca826145a2d" 00:30:28.361 ], 00:30:28.361 "product_name": "NVMe disk", 00:30:28.361 "block_size": 512, 00:30:28.361 "num_blocks": 2097152, 00:30:28.361 "uuid": "fd3d7d27-c987-4214-8644-eca826145a2d", 00:30:28.361 "assigned_rate_limits": { 00:30:28.361 "rw_ios_per_sec": 0, 00:30:28.361 "rw_mbytes_per_sec": 0, 00:30:28.361 "r_mbytes_per_sec": 0, 00:30:28.361 "w_mbytes_per_sec": 0 00:30:28.361 }, 00:30:28.361 "claimed": false, 00:30:28.361 "zoned": false, 00:30:28.361 "supported_io_types": { 00:30:28.361 "read": true, 00:30:28.361 "write": true, 00:30:28.361 "unmap": false, 00:30:28.361 "write_zeroes": true, 00:30:28.361 "flush": true, 00:30:28.361 "reset": true, 00:30:28.361 "compare": true, 00:30:28.361 "compare_and_write": true, 00:30:28.361 "abort": true, 00:30:28.361 "nvme_admin": true, 00:30:28.361 "nvme_io": true 00:30:28.361 }, 00:30:28.361 "memory_domains": [ 00:30:28.361 { 00:30:28.361 "dma_device_id": "system", 00:30:28.361 "dma_device_type": 1 00:30:28.361 } 00:30:28.361 ], 00:30:28.361 "driver_specific": { 00:30:28.361 "nvme": [ 00:30:28.361 { 00:30:28.361 "trid": { 00:30:28.361 "trtype": "TCP", 00:30:28.361 "adrfam": "IPv4", 00:30:28.361 "traddr": "10.0.0.2", 00:30:28.361 "trsvcid": "4420", 00:30:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:28.361 }, 00:30:28.361 "ctrlr_data": { 00:30:28.361 "cntlid": 1, 00:30:28.361 "vendor_id": "0x8086", 00:30:28.361 "model_number": "SPDK bdev Controller", 00:30:28.361 "serial_number": "00000000000000000000", 00:30:28.361 "firmware_revision": "24.05", 00:30:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.361 "oacs": { 00:30:28.361 "security": 0, 00:30:28.361 "format": 0, 00:30:28.361 "firmware": 0, 00:30:28.361 "ns_manage": 0 00:30:28.361 }, 00:30:28.361 "multi_ctrlr": true, 00:30:28.361 "ana_reporting": false 00:30:28.361 }, 00:30:28.361 "vs": { 00:30:28.361 "nvme_version": "1.3" 00:30:28.361 }, 00:30:28.361 "ns_data": { 00:30:28.361 "id": 1, 00:30:28.361 "can_share": true 00:30:28.361 } 
00:30:28.361 } 00:30:28.361 ], 00:30:28.361 "mp_policy": "active_passive" 00:30:28.361 } 00:30:28.361 } 00:30:28.361 ] 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 [2024-05-16 09:43:21.746130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:28.361 [2024-05-16 09:43:21.746191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2666ae0 (9): Bad file descriptor 00:30:28.361 [2024-05-16 09:43:21.878150] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.361 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.361 [ 00:30:28.361 { 00:30:28.361 "name": "nvme0n1", 00:30:28.361 "aliases": [ 00:30:28.361 "fd3d7d27-c987-4214-8644-eca826145a2d" 00:30:28.361 ], 00:30:28.361 "product_name": "NVMe disk", 00:30:28.361 "block_size": 512, 00:30:28.361 "num_blocks": 2097152, 00:30:28.361 "uuid": "fd3d7d27-c987-4214-8644-eca826145a2d", 00:30:28.361 "assigned_rate_limits": { 00:30:28.361 "rw_ios_per_sec": 0, 00:30:28.361 "rw_mbytes_per_sec": 0, 00:30:28.361 "r_mbytes_per_sec": 0, 00:30:28.361 "w_mbytes_per_sec": 0 00:30:28.361 }, 00:30:28.361 "claimed": false, 00:30:28.361 "zoned": false, 00:30:28.361 "supported_io_types": { 00:30:28.361 "read": true, 00:30:28.361 "write": true, 00:30:28.361 "unmap": false, 00:30:28.361 "write_zeroes": true, 00:30:28.361 "flush": true, 00:30:28.361 "reset": true, 00:30:28.361 "compare": true, 00:30:28.361 "compare_and_write": true, 00:30:28.361 "abort": true, 00:30:28.361 "nvme_admin": true, 00:30:28.361 "nvme_io": true 00:30:28.361 }, 00:30:28.361 "memory_domains": [ 00:30:28.361 { 00:30:28.361 "dma_device_id": "system", 00:30:28.361 "dma_device_type": 1 00:30:28.361 } 00:30:28.361 ], 00:30:28.361 "driver_specific": { 00:30:28.361 "nvme": [ 00:30:28.361 { 00:30:28.361 "trid": { 00:30:28.361 "trtype": "TCP", 00:30:28.361 "adrfam": "IPv4", 00:30:28.361 "traddr": "10.0.0.2", 00:30:28.361 "trsvcid": "4420", 00:30:28.361 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:28.361 }, 00:30:28.361 "ctrlr_data": { 00:30:28.361 "cntlid": 2, 00:30:28.361 "vendor_id": "0x8086", 00:30:28.361 "model_number": "SPDK bdev Controller", 00:30:28.361 "serial_number": "00000000000000000000", 00:30:28.361 "firmware_revision": "24.05", 00:30:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.362 "oacs": { 00:30:28.362 "security": 0, 00:30:28.362 "format": 0, 00:30:28.362 "firmware": 0, 00:30:28.362 "ns_manage": 0 00:30:28.362 }, 00:30:28.362 "multi_ctrlr": true, 00:30:28.362 "ana_reporting": false 00:30:28.362 }, 00:30:28.362 "vs": { 00:30:28.362 "nvme_version": "1.3" 00:30:28.362 }, 00:30:28.362 "ns_data": { 00:30:28.362 "id": 1, 00:30:28.362 "can_share": true 00:30:28.362 } 00:30:28.362 } 00:30:28.362 ], 00:30:28.362 "mp_policy": "active_passive" 
00:30:28.362 } 00:30:28.362 } 00:30:28.362 ] 00:30:28.362 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.362 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.362 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.362 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.362 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.362 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Grba0mUdgP 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Grba0mUdgP 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.624 [2024-05-16 09:43:21.942741] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:28.624 [2024-05-16 09:43:21.942858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Grba0mUdgP 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.624 [2024-05-16 09:43:21.954763] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Grba0mUdgP 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.624 09:43:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.624 [2024-05-16 09:43:21.966797] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:28.624 [2024-05-16 09:43:21.966833] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:30:28.624 nvme0n1 00:30:28.624 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.624 09:43:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:28.624 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.624 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.624 [ 00:30:28.624 { 00:30:28.624 "name": "nvme0n1", 00:30:28.624 "aliases": [ 00:30:28.624 "fd3d7d27-c987-4214-8644-eca826145a2d" 00:30:28.624 ], 00:30:28.624 "product_name": "NVMe disk", 00:30:28.624 "block_size": 512, 00:30:28.624 "num_blocks": 2097152, 00:30:28.624 "uuid": "fd3d7d27-c987-4214-8644-eca826145a2d", 00:30:28.624 "assigned_rate_limits": { 00:30:28.624 "rw_ios_per_sec": 0, 00:30:28.624 "rw_mbytes_per_sec": 0, 00:30:28.624 "r_mbytes_per_sec": 0, 00:30:28.624 "w_mbytes_per_sec": 0 00:30:28.624 }, 00:30:28.624 "claimed": false, 00:30:28.624 "zoned": false, 00:30:28.624 "supported_io_types": { 00:30:28.624 "read": true, 00:30:28.624 "write": true, 00:30:28.624 "unmap": false, 00:30:28.624 "write_zeroes": true, 00:30:28.624 "flush": true, 00:30:28.624 "reset": true, 00:30:28.624 "compare": true, 00:30:28.624 "compare_and_write": true, 00:30:28.625 "abort": true, 00:30:28.625 "nvme_admin": true, 00:30:28.625 "nvme_io": true 00:30:28.625 }, 00:30:28.625 "memory_domains": [ 00:30:28.625 { 00:30:28.625 "dma_device_id": "system", 00:30:28.625 "dma_device_type": 1 00:30:28.625 } 00:30:28.625 ], 00:30:28.625 "driver_specific": { 00:30:28.625 "nvme": [ 00:30:28.625 { 00:30:28.625 "trid": { 00:30:28.625 "trtype": "TCP", 00:30:28.625 "adrfam": "IPv4", 00:30:28.625 "traddr": "10.0.0.2", 00:30:28.625 "trsvcid": "4421", 00:30:28.625 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:28.625 }, 00:30:28.625 "ctrlr_data": { 00:30:28.625 "cntlid": 3, 00:30:28.625 "vendor_id": "0x8086", 00:30:28.625 "model_number": "SPDK bdev Controller", 00:30:28.625 "serial_number": "00000000000000000000", 00:30:28.625 "firmware_revision": "24.05", 00:30:28.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.625 "oacs": { 00:30:28.625 "security": 0, 00:30:28.625 "format": 0, 00:30:28.625 "firmware": 0, 00:30:28.625 "ns_manage": 0 00:30:28.625 }, 00:30:28.625 "multi_ctrlr": true, 00:30:28.625 "ana_reporting": false 00:30:28.625 }, 00:30:28.625 "vs": { 00:30:28.625 "nvme_version": "1.3" 00:30:28.625 }, 00:30:28.625 "ns_data": { 00:30:28.625 "id": 1, 00:30:28.625 "can_share": true 00:30:28.625 } 00:30:28.625 } 00:30:28.625 ], 00:30:28.625 "mp_policy": "active_passive" 00:30:28.625 } 00:30:28.625 } 00:30:28.625 ] 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Grba0mUdgP 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.625 rmmod nvme_tcp 00:30:28.625 rmmod nvme_fabrics 00:30:28.625 rmmod nvme_keyring 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 440044 ']' 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 440044 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 440044 ']' 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 440044 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:28.625 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 440044 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 440044' 00:30:28.885 killing process with pid 440044 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 440044 00:30:28.885 [2024-05-16 09:43:22.207468] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:28.885 [2024-05-16 09:43:22.207498] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:28.885 [2024-05-16 09:43:22.207506] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 440044 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.885 09:43:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.433 09:43:24 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:31.433 00:30:31.433 real 0m10.931s 00:30:31.433 user 0m3.977s 00:30:31.433 sys 0m5.410s 00:30:31.433 09:43:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:31.433 09:43:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:31.433 ************************************ 00:30:31.433 END TEST nvmf_async_init 00:30:31.433 ************************************ 00:30:31.434 09:43:24 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:31.434 09:43:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:31.434 09:43:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:31.434 09:43:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.434 ************************************ 00:30:31.434 START TEST dma 00:30:31.434 ************************************ 00:30:31.434 09:43:24 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:31.434 * Looking for test storage... 00:30:31.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.434 09:43:24 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.434 09:43:24 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.434 09:43:24 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.434 09:43:24 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.434 09:43:24 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:31.434 09:43:24 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.434 09:43:24 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.434 09:43:24 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:31.434 09:43:24 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:31.434 00:30:31.434 real 0m0.129s 00:30:31.434 user 0m0.061s 00:30:31.434 sys 0m0.074s 00:30:31.434 09:43:24 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:31.434 09:43:24 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:31.434 ************************************ 
00:30:31.434 END TEST dma 00:30:31.434 ************************************ 00:30:31.434 09:43:24 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:31.434 09:43:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:31.434 09:43:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:31.434 09:43:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.434 ************************************ 00:30:31.434 START TEST nvmf_identify 00:30:31.434 ************************************ 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:31.434 * Looking for test storage... 00:30:31.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.434 09:43:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.589 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:39.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:39.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:39.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
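The scan above walks the cached PCI IDs, matches the two Intel E810 ports (0x8086:0x159b), and resolves each one's kernel net device from sysfs (cvl_0_0 and cvl_0_1). A minimal standalone sketch of the same lookup, assuming stock lspci and sysfs rather than the harness's pci_bus_cache, would be:

  # Hypothetical sketch: list E810 (8086:159b) functions and the net devices bound to them.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
    done
  done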
00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:39.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:39.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:39.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:30:39.590 00:30:39.590 --- 10.0.0.2 ping statistics --- 00:30:39.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.590 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:39.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:39.590 00:30:39.590 --- 10.0.0.1 ping statistics --- 00:30:39.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.590 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=444442 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 444442 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 444442 ']' 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:39.590 09:43:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.590 [2024-05-16 09:43:31.999910] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
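The block above is the whole TCP fixture for this host run: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the peer port cvl_0_1 stays in the root namespace with 10.0.0.1/24, reachability is pinged in both directions, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. Condensed into a sketch (interface, namespace, address, and application arguments are copied from this log; the nvmf_tgt path is shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &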
00:30:39.590 [2024-05-16 09:43:31.999957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.590 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.590 [2024-05-16 09:43:32.067529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:39.591 [2024-05-16 09:43:32.133681] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.591 [2024-05-16 09:43:32.133720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.591 [2024-05-16 09:43:32.133729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.591 [2024-05-16 09:43:32.133736] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.591 [2024-05-16 09:43:32.133742] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.591 [2024-05-16 09:43:32.133890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.591 [2024-05-16 09:43:32.134033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.591 [2024-05-16 09:43:32.134097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.591 [2024-05-16 09:43:32.134113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 [2024-05-16 09:43:32.241757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 Malloc0 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 [2024-05-16 09:43:32.334526] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:39.591 [2024-05-16 09:43:32.334772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.591 [ 00:30:39.591 { 00:30:39.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:39.591 "subtype": "Discovery", 00:30:39.591 "listen_addresses": [ 00:30:39.591 { 00:30:39.591 "trtype": "TCP", 00:30:39.591 "adrfam": "IPv4", 00:30:39.591 "traddr": "10.0.0.2", 00:30:39.591 "trsvcid": "4420" 00:30:39.591 } 00:30:39.591 ], 00:30:39.591 "allow_any_host": true, 00:30:39.591 "hosts": [] 00:30:39.591 }, 00:30:39.591 { 00:30:39.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.591 "subtype": "NVMe", 00:30:39.591 "listen_addresses": [ 00:30:39.591 { 00:30:39.591 "trtype": "TCP", 00:30:39.591 "adrfam": "IPv4", 00:30:39.591 "traddr": "10.0.0.2", 00:30:39.591 "trsvcid": "4420" 00:30:39.591 } 00:30:39.591 ], 00:30:39.591 "allow_any_host": true, 00:30:39.591 "hosts": [], 00:30:39.591 "serial_number": "SPDK00000000000001", 00:30:39.591 "model_number": "SPDK bdev Controller", 00:30:39.591 "max_namespaces": 32, 00:30:39.591 "min_cntlid": 1, 00:30:39.591 "max_cntlid": 65519, 00:30:39.591 "namespaces": [ 00:30:39.591 { 00:30:39.591 "nsid": 1, 00:30:39.591 "bdev_name": "Malloc0", 00:30:39.591 "name": "Malloc0", 00:30:39.591 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:39.591 "eui64": "ABCDEF0123456789", 00:30:39.591 "uuid": "b09ea1da-014a-45e4-9b37-f9ffbba7c499" 00:30:39.591 } 00:30:39.591 ] 00:30:39.591 } 00:30:39.591 ] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.591 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:39.591 [2024-05-16 
09:43:32.396639] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:30:39.591 [2024-05-16 09:43:32.396704] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444630 ] 00:30:39.591 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.591 [2024-05-16 09:43:32.427708] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:39.591 [2024-05-16 09:43:32.427754] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:39.591 [2024-05-16 09:43:32.427760] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:39.591 [2024-05-16 09:43:32.427779] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:39.591 [2024-05-16 09:43:32.427787] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:39.591 [2024-05-16 09:43:32.431087] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:39.591 [2024-05-16 09:43:32.431118] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1386ec0 0 00:30:39.591 [2024-05-16 09:43:32.439061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:39.591 [2024-05-16 09:43:32.439072] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:39.591 [2024-05-16 09:43:32.439076] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:39.591 [2024-05-16 09:43:32.439079] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:39.591 [2024-05-16 09:43:32.439114] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.591 [2024-05-16 09:43:32.439119] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.591 [2024-05-16 09:43:32.439123] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.591 [2024-05-16 09:43:32.439136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:39.591 [2024-05-16 09:43:32.439151] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.591 [2024-05-16 09:43:32.445062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.591 [2024-05-16 09:43:32.445072] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.591 [2024-05-16 09:43:32.445075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.591 [2024-05-16 09:43:32.445080] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.591 [2024-05-16 09:43:32.445090] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:39.591 [2024-05-16 09:43:32.445097] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:39.591 [2024-05-16 09:43:32.445102] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:39.591 [2024-05-16 09:43:32.445114] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
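Before that identify run started, the host script provisioned the target over its RPC socket: the TCP transport (with the options shown above), a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and TCP listeners on 10.0.0.2:4420 for both the subsystem and discovery. The test's rpc_cmd wrapper forwards to scripts/rpc.py, so roughly the same configuration can be replayed by hand (a sketch, assuming the default /var/tmp/spdk.sock RPC socket; flags mirror the rpc_cmd calls in the log):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems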
00:30:39.591 [2024-05-16 09:43:32.445118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.591 [2024-05-16 09:43:32.445122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.591 [2024-05-16 09:43:32.445129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.445142] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.445257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.445263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.592 [2024-05-16 09:43:32.445266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445270] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.445276] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:39.592 [2024-05-16 09:43:32.445283] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:39.592 [2024-05-16 09:43:32.445289] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445293] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.592 [2024-05-16 09:43:32.445303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.445317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.445522] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.445528] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.592 [2024-05-16 09:43:32.445531] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445535] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.445541] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:39.592 [2024-05-16 09:43:32.445548] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:39.592 [2024-05-16 09:43:32.445555] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445559] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.592 [2024-05-16 09:43:32.445569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.445579] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.445732] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.445738] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.592 [2024-05-16 09:43:32.445742] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445746] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.445751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:39.592 [2024-05-16 09:43:32.445760] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445767] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.592 [2024-05-16 09:43:32.445774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.445783] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.445957] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.445963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.592 [2024-05-16 09:43:32.445966] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.445970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.445975] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:39.592 [2024-05-16 09:43:32.445980] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:39.592 [2024-05-16 09:43:32.445987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:39.592 [2024-05-16 09:43:32.450057] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:39.592 [2024-05-16 09:43:32.450064] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:39.592 [2024-05-16 09:43:32.450073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450080] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.592 [2024-05-16 09:43:32.450090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.450101] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.450299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.450305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
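What the DEBUG trace is walking through here is the ordinary admin-queue bring-up of the discovery controller over NVMe/TCP: ICReq/ICResp, FABRIC CONNECT, property reads of VS and CAP, disabling the controller and waiting for CSTS.RDY = 0, then setting CC.EN = 1 and waiting for CSTS.RDY = 1 before identify. For comparison outside the SPDK identify tool, the same discovery exchange could be driven with the kernel initiator via nvme-cli (an equivalent shown for reference, not a command this test runs):

  nvme discover -t tcp -a 10.0.0.2 -s 4420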
00:30:39.592 [2024-05-16 09:43:32.450309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.450318] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:39.592 [2024-05-16 09:43:32.450327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450334] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.592 [2024-05-16 09:43:32.450341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.450350] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.450537] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.450543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.592 [2024-05-16 09:43:32.450546] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450550] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.450555] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:39.592 [2024-05-16 09:43:32.450560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:39.592 [2024-05-16 09:43:32.450567] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:39.592 [2024-05-16 09:43:32.450575] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:39.592 [2024-05-16 09:43:32.450583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450587] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.592 [2024-05-16 09:43:32.450594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.592 [2024-05-16 09:43:32.450603] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.592 [2024-05-16 09:43:32.450776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.592 [2024-05-16 09:43:32.450783] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.592 [2024-05-16 09:43:32.450787] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450790] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1386ec0): datao=0, datal=4096, cccid=0 00:30:39.592 [2024-05-16 09:43:32.450795] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1409df0) on tqpair(0x1386ec0): expected_datao=0, 
payload_size=4096 00:30:39.592 [2024-05-16 09:43:32.450800] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450814] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.450821] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.492218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.592 [2024-05-16 09:43:32.492233] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.592 [2024-05-16 09:43:32.492237] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.592 [2024-05-16 09:43:32.492241] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.592 [2024-05-16 09:43:32.492252] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:39.593 [2024-05-16 09:43:32.492257] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:39.593 [2024-05-16 09:43:32.492261] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:39.593 [2024-05-16 09:43:32.492270] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:39.593 [2024-05-16 09:43:32.492274] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:39.593 [2024-05-16 09:43:32.492279] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:39.593 [2024-05-16 09:43:32.492288] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:39.593 [2024-05-16 09:43:32.492295] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492299] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492302] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:39.593 [2024-05-16 09:43:32.492324] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.593 [2024-05-16 09:43:32.492542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.593 [2024-05-16 09:43:32.492549] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.593 [2024-05-16 09:43:32.492553] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492556] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1409df0) on tqpair=0x1386ec0 00:30:39.593 [2024-05-16 09:43:32.492564] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492568] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492572] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.593 [2024-05-16 09:43:32.492583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492590] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.593 [2024-05-16 09:43:32.492602] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492609] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.593 [2024-05-16 09:43:32.492620] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492630] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.593 [2024-05-16 09:43:32.492640] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:39.593 [2024-05-16 09:43:32.492650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:39.593 [2024-05-16 09:43:32.492657] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492661] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.593 [2024-05-16 09:43:32.492679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409df0, cid 0, qid 0 00:30:39.593 [2024-05-16 09:43:32.492684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1409f50, cid 1, qid 0 00:30:39.593 [2024-05-16 09:43:32.492688] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a0b0, cid 2, qid 0 00:30:39.593 [2024-05-16 09:43:32.492693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.593 [2024-05-16 09:43:32.492698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a370, cid 4, qid 0 00:30:39.593 [2024-05-16 09:43:32.492850] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.593 [2024-05-16 09:43:32.492856] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.593 [2024-05-16 09:43:32.492859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492863] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x140a370) on tqpair=0x1386ec0 00:30:39.593 [2024-05-16 09:43:32.492869] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:39.593 [2024-05-16 09:43:32.492873] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:39.593 [2024-05-16 09:43:32.492884] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.492888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.492894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.593 [2024-05-16 09:43:32.492904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a370, cid 4, qid 0 00:30:39.593 [2024-05-16 09:43:32.492991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.593 [2024-05-16 09:43:32.492998] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.593 [2024-05-16 09:43:32.493001] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493005] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1386ec0): datao=0, datal=4096, cccid=4 00:30:39.593 [2024-05-16 09:43:32.493009] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a370) on tqpair(0x1386ec0): expected_datao=0, payload_size=4096 00:30:39.593 [2024-05-16 09:43:32.493014] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493029] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493033] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493228] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.593 [2024-05-16 09:43:32.493235] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.593 [2024-05-16 09:43:32.493240] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a370) on tqpair=0x1386ec0 00:30:39.593 [2024-05-16 09:43:32.493256] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:39.593 [2024-05-16 09:43:32.493278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.493289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.593 [2024-05-16 09:43:32.493296] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493303] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1386ec0) 00:30:39.593 [2024-05-16 09:43:32.493309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
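The admin commands printed around here finish the sequence: async-event configuration and the AER requests, the keep-alive timer (kept alive every 5,000,000 us per the trace), and GET LOG PAGE reads of the Discovery log. Decoding cdw10 of those reads (bits 7:0 = log identifier, bits 31:16 = NUMDL, number of dwords minus one; this is my decode of the standard Get Log Page layout, not something the trace states) explains the transfer sizes seen in the c2h PDUs here and just below:

  cdw10 = 0x00ff0070 -> LID 0x70 (Discovery), 256 dwords = 1024 bytes (log header)
  cdw10 = 0x02ff0070 -> LID 0x70,             768 dwords = 3072 bytes (header plus entries)
  cdw10 = 0x00010070 -> LID 0x70,               2 dwords =    8 bytes (re-read of GENCTR)

These sizes match the datal=1024, datal=3072, and datal=8 c2h_data PDUs in the trace that follows.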
00:30:39.593 [2024-05-16 09:43:32.493322] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a370, cid 4, qid 0 00:30:39.593 [2024-05-16 09:43:32.493327] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a4d0, cid 5, qid 0 00:30:39.593 [2024-05-16 09:43:32.493560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.593 [2024-05-16 09:43:32.493566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.593 [2024-05-16 09:43:32.493569] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493573] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1386ec0): datao=0, datal=1024, cccid=4 00:30:39.593 [2024-05-16 09:43:32.493577] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a370) on tqpair(0x1386ec0): expected_datao=0, payload_size=1024 00:30:39.593 [2024-05-16 09:43:32.493581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493588] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493591] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.593 [2024-05-16 09:43:32.493597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.594 [2024-05-16 09:43:32.493603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.594 [2024-05-16 09:43:32.493606] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.493609] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a4d0) on tqpair=0x1386ec0 00:30:39.594 [2024-05-16 09:43:32.538062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.594 [2024-05-16 09:43:32.538072] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.594 [2024-05-16 09:43:32.538076] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a370) on tqpair=0x1386ec0 00:30:39.594 [2024-05-16 09:43:32.538095] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538099] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1386ec0) 00:30:39.594 [2024-05-16 09:43:32.538106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.594 [2024-05-16 09:43:32.538121] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a370, cid 4, qid 0 00:30:39.594 [2024-05-16 09:43:32.538206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.594 [2024-05-16 09:43:32.538212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.594 [2024-05-16 09:43:32.538216] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538219] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1386ec0): datao=0, datal=3072, cccid=4 00:30:39.594 [2024-05-16 09:43:32.538226] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a370) on tqpair(0x1386ec0): expected_datao=0, payload_size=3072 00:30:39.594 [2024-05-16 09:43:32.538230] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538237] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538240] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.594 [2024-05-16 09:43:32.538338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.594 [2024-05-16 09:43:32.538341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a370) on tqpair=0x1386ec0 00:30:39.594 [2024-05-16 09:43:32.538354] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538358] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1386ec0) 00:30:39.594 [2024-05-16 09:43:32.538364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.594 [2024-05-16 09:43:32.538377] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a370, cid 4, qid 0 00:30:39.594 [2024-05-16 09:43:32.538458] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.594 [2024-05-16 09:43:32.538464] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.594 [2024-05-16 09:43:32.538468] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538471] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1386ec0): datao=0, datal=8, cccid=4 00:30:39.594 [2024-05-16 09:43:32.538475] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a370) on tqpair(0x1386ec0): expected_datao=0, payload_size=8 00:30:39.594 [2024-05-16 09:43:32.538480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538486] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.538489] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.580126] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.594 [2024-05-16 09:43:32.580135] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.594 [2024-05-16 09:43:32.580138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.594 [2024-05-16 09:43:32.580142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a370) on tqpair=0x1386ec0 00:30:39.594 ===================================================== 00:30:39.594 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:39.594 ===================================================== 00:30:39.594 Controller Capabilities/Features 00:30:39.594 ================================ 00:30:39.594 Vendor ID: 0000 00:30:39.594 Subsystem Vendor ID: 0000 00:30:39.594 Serial Number: .................... 00:30:39.594 Model Number: ........................................ 
00:30:39.594 Firmware Version: 24.05 00:30:39.594 Recommended Arb Burst: 0 00:30:39.594 IEEE OUI Identifier: 00 00 00 00:30:39.594 Multi-path I/O 00:30:39.594 May have multiple subsystem ports: No 00:30:39.594 May have multiple controllers: No 00:30:39.594 Associated with SR-IOV VF: No 00:30:39.594 Max Data Transfer Size: 131072 00:30:39.594 Max Number of Namespaces: 0 00:30:39.594 Max Number of I/O Queues: 1024 00:30:39.594 NVMe Specification Version (VS): 1.3 00:30:39.594 NVMe Specification Version (Identify): 1.3 00:30:39.594 Maximum Queue Entries: 128 00:30:39.594 Contiguous Queues Required: Yes 00:30:39.594 Arbitration Mechanisms Supported 00:30:39.594 Weighted Round Robin: Not Supported 00:30:39.594 Vendor Specific: Not Supported 00:30:39.594 Reset Timeout: 15000 ms 00:30:39.594 Doorbell Stride: 4 bytes 00:30:39.594 NVM Subsystem Reset: Not Supported 00:30:39.594 Command Sets Supported 00:30:39.594 NVM Command Set: Supported 00:30:39.594 Boot Partition: Not Supported 00:30:39.594 Memory Page Size Minimum: 4096 bytes 00:30:39.594 Memory Page Size Maximum: 4096 bytes 00:30:39.594 Persistent Memory Region: Not Supported 00:30:39.594 Optional Asynchronous Events Supported 00:30:39.594 Namespace Attribute Notices: Not Supported 00:30:39.594 Firmware Activation Notices: Not Supported 00:30:39.594 ANA Change Notices: Not Supported 00:30:39.594 PLE Aggregate Log Change Notices: Not Supported 00:30:39.594 LBA Status Info Alert Notices: Not Supported 00:30:39.594 EGE Aggregate Log Change Notices: Not Supported 00:30:39.594 Normal NVM Subsystem Shutdown event: Not Supported 00:30:39.594 Zone Descriptor Change Notices: Not Supported 00:30:39.594 Discovery Log Change Notices: Supported 00:30:39.594 Controller Attributes 00:30:39.594 128-bit Host Identifier: Not Supported 00:30:39.594 Non-Operational Permissive Mode: Not Supported 00:30:39.594 NVM Sets: Not Supported 00:30:39.594 Read Recovery Levels: Not Supported 00:30:39.594 Endurance Groups: Not Supported 00:30:39.594 Predictable Latency Mode: Not Supported 00:30:39.594 Traffic Based Keep ALive: Not Supported 00:30:39.594 Namespace Granularity: Not Supported 00:30:39.594 SQ Associations: Not Supported 00:30:39.594 UUID List: Not Supported 00:30:39.594 Multi-Domain Subsystem: Not Supported 00:30:39.595 Fixed Capacity Management: Not Supported 00:30:39.595 Variable Capacity Management: Not Supported 00:30:39.595 Delete Endurance Group: Not Supported 00:30:39.595 Delete NVM Set: Not Supported 00:30:39.595 Extended LBA Formats Supported: Not Supported 00:30:39.595 Flexible Data Placement Supported: Not Supported 00:30:39.595 00:30:39.595 Controller Memory Buffer Support 00:30:39.595 ================================ 00:30:39.595 Supported: No 00:30:39.595 00:30:39.595 Persistent Memory Region Support 00:30:39.595 ================================ 00:30:39.595 Supported: No 00:30:39.595 00:30:39.595 Admin Command Set Attributes 00:30:39.595 ============================ 00:30:39.595 Security Send/Receive: Not Supported 00:30:39.595 Format NVM: Not Supported 00:30:39.595 Firmware Activate/Download: Not Supported 00:30:39.595 Namespace Management: Not Supported 00:30:39.595 Device Self-Test: Not Supported 00:30:39.595 Directives: Not Supported 00:30:39.595 NVMe-MI: Not Supported 00:30:39.595 Virtualization Management: Not Supported 00:30:39.595 Doorbell Buffer Config: Not Supported 00:30:39.595 Get LBA Status Capability: Not Supported 00:30:39.595 Command & Feature Lockdown Capability: Not Supported 00:30:39.595 Abort Command Limit: 1 00:30:39.595 Async 
Event Request Limit: 4 00:30:39.595 Number of Firmware Slots: N/A 00:30:39.595 Firmware Slot 1 Read-Only: N/A 00:30:39.595 Firmware Activation Without Reset: N/A 00:30:39.595 Multiple Update Detection Support: N/A 00:30:39.595 Firmware Update Granularity: No Information Provided 00:30:39.595 Per-Namespace SMART Log: No 00:30:39.595 Asymmetric Namespace Access Log Page: Not Supported 00:30:39.595 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:39.595 Command Effects Log Page: Not Supported 00:30:39.595 Get Log Page Extended Data: Supported 00:30:39.595 Telemetry Log Pages: Not Supported 00:30:39.595 Persistent Event Log Pages: Not Supported 00:30:39.595 Supported Log Pages Log Page: May Support 00:30:39.595 Commands Supported & Effects Log Page: Not Supported 00:30:39.595 Feature Identifiers & Effects Log Page:May Support 00:30:39.595 NVMe-MI Commands & Effects Log Page: May Support 00:30:39.595 Data Area 4 for Telemetry Log: Not Supported 00:30:39.595 Error Log Page Entries Supported: 128 00:30:39.595 Keep Alive: Not Supported 00:30:39.595 00:30:39.595 NVM Command Set Attributes 00:30:39.595 ========================== 00:30:39.595 Submission Queue Entry Size 00:30:39.595 Max: 1 00:30:39.595 Min: 1 00:30:39.595 Completion Queue Entry Size 00:30:39.595 Max: 1 00:30:39.595 Min: 1 00:30:39.595 Number of Namespaces: 0 00:30:39.595 Compare Command: Not Supported 00:30:39.595 Write Uncorrectable Command: Not Supported 00:30:39.595 Dataset Management Command: Not Supported 00:30:39.595 Write Zeroes Command: Not Supported 00:30:39.595 Set Features Save Field: Not Supported 00:30:39.595 Reservations: Not Supported 00:30:39.595 Timestamp: Not Supported 00:30:39.595 Copy: Not Supported 00:30:39.595 Volatile Write Cache: Not Present 00:30:39.595 Atomic Write Unit (Normal): 1 00:30:39.595 Atomic Write Unit (PFail): 1 00:30:39.595 Atomic Compare & Write Unit: 1 00:30:39.595 Fused Compare & Write: Supported 00:30:39.595 Scatter-Gather List 00:30:39.595 SGL Command Set: Supported 00:30:39.595 SGL Keyed: Supported 00:30:39.595 SGL Bit Bucket Descriptor: Not Supported 00:30:39.595 SGL Metadata Pointer: Not Supported 00:30:39.595 Oversized SGL: Not Supported 00:30:39.595 SGL Metadata Address: Not Supported 00:30:39.595 SGL Offset: Supported 00:30:39.595 Transport SGL Data Block: Not Supported 00:30:39.595 Replay Protected Memory Block: Not Supported 00:30:39.595 00:30:39.595 Firmware Slot Information 00:30:39.595 ========================= 00:30:39.595 Active slot: 0 00:30:39.595 00:30:39.595 00:30:39.595 Error Log 00:30:39.595 ========= 00:30:39.595 00:30:39.595 Active Namespaces 00:30:39.595 ================= 00:30:39.595 Discovery Log Page 00:30:39.595 ================== 00:30:39.595 Generation Counter: 2 00:30:39.596 Number of Records: 2 00:30:39.596 Record Format: 0 00:30:39.596 00:30:39.596 Discovery Log Entry 0 00:30:39.596 ---------------------- 00:30:39.596 Transport Type: 3 (TCP) 00:30:39.596 Address Family: 1 (IPv4) 00:30:39.596 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:39.596 Entry Flags: 00:30:39.596 Duplicate Returned Information: 1 00:30:39.596 Explicit Persistent Connection Support for Discovery: 1 00:30:39.596 Transport Requirements: 00:30:39.596 Secure Channel: Not Required 00:30:39.596 Port ID: 0 (0x0000) 00:30:39.596 Controller ID: 65535 (0xffff) 00:30:39.596 Admin Max SQ Size: 128 00:30:39.596 Transport Service Identifier: 4420 00:30:39.596 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:39.596 Transport Address: 10.0.0.2 00:30:39.596 
Discovery Log Entry 1 00:30:39.596 ---------------------- 00:30:39.596 Transport Type: 3 (TCP) 00:30:39.596 Address Family: 1 (IPv4) 00:30:39.596 Subsystem Type: 2 (NVM Subsystem) 00:30:39.596 Entry Flags: 00:30:39.596 Duplicate Returned Information: 0 00:30:39.596 Explicit Persistent Connection Support for Discovery: 0 00:30:39.596 Transport Requirements: 00:30:39.596 Secure Channel: Not Required 00:30:39.596 Port ID: 0 (0x0000) 00:30:39.596 Controller ID: 65535 (0xffff) 00:30:39.596 Admin Max SQ Size: 128 00:30:39.596 Transport Service Identifier: 4420 00:30:39.596 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:39.596 Transport Address: 10.0.0.2 [2024-05-16 09:43:32.580227] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:39.596 [2024-05-16 09:43:32.580240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.596 [2024-05-16 09:43:32.580247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.596 [2024-05-16 09:43:32.580253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.596 [2024-05-16 09:43:32.580259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.596 [2024-05-16 09:43:32.580268] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.580282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.580294] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.580371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.580379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.580382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.580396] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580400] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580403] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.580410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.580422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.580649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.580656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.580659] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580663] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.580668] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:39.596 [2024-05-16 09:43:32.580673] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:39.596 [2024-05-16 09:43:32.580682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580686] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.580696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.580705] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.580915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.580921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.580925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.580939] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.580946] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.580953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.580962] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.581135] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.581142] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.581145] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.581159] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581166] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.581173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.581185] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.581397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 
09:43:32.581403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.581406] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581410] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.581420] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581424] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581427] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.581434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.581443] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.581617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.581623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.581627] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.581640] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581647] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.581654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.581663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.581815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.581821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.596 [2024-05-16 09:43:32.581825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581828] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.596 [2024-05-16 09:43:32.581838] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581842] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.596 [2024-05-16 09:43:32.581846] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.596 [2024-05-16 09:43:32.581852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.596 [2024-05-16 09:43:32.581862] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.596 [2024-05-16 09:43:32.582049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.596 [2024-05-16 09:43:32.586063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.597 [2024-05-16 09:43:32.586067] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:39.597 [2024-05-16 09:43:32.586071] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.597 [2024-05-16 09:43:32.586081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.586086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.586089] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1386ec0) 00:30:39.597 [2024-05-16 09:43:32.586096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.597 [2024-05-16 09:43:32.586111] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a210, cid 3, qid 0 00:30:39.597 [2024-05-16 09:43:32.586236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.597 [2024-05-16 09:43:32.586242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.597 [2024-05-16 09:43:32.586245] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.586249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a210) on tqpair=0x1386ec0 00:30:39.597 [2024-05-16 09:43:32.586257] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:30:39.597 00:30:39.597 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:39.597 [2024-05-16 09:43:32.626965] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
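For context, a minimal sketch (not part of the captured output) of the connect, identify, and detach flow that the spdk_nvme_identify step above exercises against nqn.2016-06.io.spdk:cnode1, written against SPDK's public NVMe API; the transport-ID string mirrors the -r argument on the command line, while the program name and printed fields are illustrative choices rather than anything taken from the log.

/* Hedged sketch: connect to the TCP target, read the cached IDENTIFY
 * CONTROLLER data, then detach. Only public SPDK calls are used; error
 * handling is reduced to early returns for brevity. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/stdinc.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport-ID format as the -r argument above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the fabrics CONNECT / read vs / read cap / enable-controller
	 * sequence visible in the debug entries that follow. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Cached IDENTIFY CONTROLLER data (serial, model, firmware revision). */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.*s MN: %.*s FR: %.*s\n",
	       (int)sizeof(cdata->sn), cdata->sn,
	       (int)sizeof(cdata->mn), cdata->mn,
	       (int)sizeof(cdata->fr), cdata->fr);

	/* Detach triggers the same shutdown/destruct sequence shown for the
	 * discovery controller earlier in this log. */
	spdk_nvme_detach(ctrlr);
	return 0;
}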
00:30:39.597 [2024-05-16 09:43:32.627006] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444740 ] 00:30:39.597 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.597 [2024-05-16 09:43:32.660596] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:39.597 [2024-05-16 09:43:32.660643] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:39.597 [2024-05-16 09:43:32.660648] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:39.597 [2024-05-16 09:43:32.660660] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:39.597 [2024-05-16 09:43:32.660667] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:39.597 [2024-05-16 09:43:32.661010] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:39.597 [2024-05-16 09:43:32.661032] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d53ec0 0 00:30:39.597 [2024-05-16 09:43:32.675061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:39.597 [2024-05-16 09:43:32.675071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:39.597 [2024-05-16 09:43:32.675075] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:39.597 [2024-05-16 09:43:32.675079] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:39.597 [2024-05-16 09:43:32.675108] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.675114] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.675118] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.597 [2024-05-16 09:43:32.675129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:39.597 [2024-05-16 09:43:32.675144] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.597 [2024-05-16 09:43:32.683064] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.597 [2024-05-16 09:43:32.683073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.597 [2024-05-16 09:43:32.683076] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.597 [2024-05-16 09:43:32.683092] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:39.597 [2024-05-16 09:43:32.683098] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:39.597 [2024-05-16 09:43:32.683107] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:39.597 [2024-05-16 09:43:32.683118] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 
09:43:32.683126] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.597 [2024-05-16 09:43:32.683133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.597 [2024-05-16 09:43:32.683145] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.597 [2024-05-16 09:43:32.683304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.597 [2024-05-16 09:43:32.683310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.597 [2024-05-16 09:43:32.683314] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683317] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.597 [2024-05-16 09:43:32.683323] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:39.597 [2024-05-16 09:43:32.683330] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:39.597 [2024-05-16 09:43:32.683337] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.597 [2024-05-16 09:43:32.683351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.597 [2024-05-16 09:43:32.683360] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.597 [2024-05-16 09:43:32.683561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.597 [2024-05-16 09:43:32.683568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.597 [2024-05-16 09:43:32.683571] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683575] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.597 [2024-05-16 09:43:32.683581] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:39.597 [2024-05-16 09:43:32.683588] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:39.597 [2024-05-16 09:43:32.683595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683599] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.597 [2024-05-16 09:43:32.683609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.597 [2024-05-16 09:43:32.683618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.597 [2024-05-16 09:43:32.683826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.597 [2024-05-16 09:43:32.683832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:39.597 [2024-05-16 09:43:32.683836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683839] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.597 [2024-05-16 09:43:32.683845] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:39.597 [2024-05-16 09:43:32.683853] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.683863] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.597 [2024-05-16 09:43:32.683870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.597 [2024-05-16 09:43:32.683879] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.597 [2024-05-16 09:43:32.684096] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.597 [2024-05-16 09:43:32.684103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.597 [2024-05-16 09:43:32.684106] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.684110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.597 [2024-05-16 09:43:32.684115] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:39.597 [2024-05-16 09:43:32.684120] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:39.597 [2024-05-16 09:43:32.684127] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:39.597 [2024-05-16 09:43:32.684232] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:39.597 [2024-05-16 09:43:32.684236] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:39.597 [2024-05-16 09:43:32.684243] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.684247] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.597 [2024-05-16 09:43:32.684250] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.597 [2024-05-16 09:43:32.684257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.597 [2024-05-16 09:43:32.684267] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.597 [2024-05-16 09:43:32.684440] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.598 [2024-05-16 09:43:32.684446] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.598 [2024-05-16 09:43:32.684450] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.684453] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on 
tqpair=0x1d53ec0 00:30:39.598 [2024-05-16 09:43:32.684459] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:39.598 [2024-05-16 09:43:32.684467] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.684471] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.684475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.684481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.598 [2024-05-16 09:43:32.684491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.598 [2024-05-16 09:43:32.684707] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.598 [2024-05-16 09:43:32.684714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.598 [2024-05-16 09:43:32.684717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.684720] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.598 [2024-05-16 09:43:32.684725] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:39.598 [2024-05-16 09:43:32.684730] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.684739] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:39.598 [2024-05-16 09:43:32.684750] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.684758] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.684762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.684768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.598 [2024-05-16 09:43:32.684778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.598 [2024-05-16 09:43:32.684994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.598 [2024-05-16 09:43:32.685000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.598 [2024-05-16 09:43:32.685003] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685007] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=4096, cccid=0 00:30:39.598 [2024-05-16 09:43:32.685011] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd6df0) on tqpair(0x1d53ec0): expected_datao=0, payload_size=4096 00:30:39.598 [2024-05-16 09:43:32.685016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685033] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685037] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.598 [2024-05-16 09:43:32.685205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.598 [2024-05-16 09:43:32.685209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.598 [2024-05-16 09:43:32.685220] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:39.598 [2024-05-16 09:43:32.685225] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:39.598 [2024-05-16 09:43:32.685229] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:39.598 [2024-05-16 09:43:32.685236] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:39.598 [2024-05-16 09:43:32.685240] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:39.598 [2024-05-16 09:43:32.685245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685253] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685259] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685263] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:39.598 [2024-05-16 09:43:32.685284] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.598 [2024-05-16 09:43:32.685451] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.598 [2024-05-16 09:43:32.685456] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.598 [2024-05-16 09:43:32.685462] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685465] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd6df0) on tqpair=0x1d53ec0 00:30:39.598 [2024-05-16 09:43:32.685473] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.598 [2024-05-16 09:43:32.685491] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685499] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.598 [2024-05-16 09:43:32.685510] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685514] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685517] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.598 [2024-05-16 09:43:32.685528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685532] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.598 [2024-05-16 09:43:32.685545] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685555] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685561] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685564] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.598 [2024-05-16 09:43:32.685582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6df0, cid 0, qid 0 00:30:39.598 [2024-05-16 09:43:32.685587] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd6f50, cid 1, qid 0 00:30:39.598 [2024-05-16 09:43:32.685592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd70b0, cid 2, qid 0 00:30:39.598 [2024-05-16 09:43:32.685597] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.598 [2024-05-16 09:43:32.685601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.598 [2024-05-16 09:43:32.685811] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.598 [2024-05-16 09:43:32.685817] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.598 [2024-05-16 09:43:32.685820] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685824] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.598 [2024-05-16 09:43:32.685829] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:39.598 [2024-05-16 09:43:32.685834] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685843] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685849] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:39.598 [2024-05-16 09:43:32.685855] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.598 [2024-05-16 09:43:32.685862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d53ec0) 00:30:39.598 [2024-05-16 09:43:32.685869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:39.598 [2024-05-16 09:43:32.685878] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.598 [2024-05-16 09:43:32.686062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.598 [2024-05-16 09:43:32.686068] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.598 [2024-05-16 09:43:32.686072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.686075] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.686130] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.686138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.686146] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.686149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.686156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.599 [2024-05-16 09:43:32.686165] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.599 [2024-05-16 09:43:32.686373] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.599 [2024-05-16 09:43:32.686379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.599 [2024-05-16 09:43:32.686382] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.686386] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=4096, cccid=4 00:30:39.599 [2024-05-16 09:43:32.686390] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7370) on tqpair(0x1d53ec0): expected_datao=0, payload_size=4096 00:30:39.599 [2024-05-16 09:43:32.686395] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.686407] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.686411] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728222] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.599 [2024-05-16 09:43:32.728242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.599 [2024-05-16 09:43:32.728246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728250] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.728264] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:39.599 [2024-05-16 09:43:32.728278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.728287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.728295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.728310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.599 [2024-05-16 09:43:32.728324] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.599 [2024-05-16 09:43:32.728521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.599 [2024-05-16 09:43:32.728527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.599 [2024-05-16 09:43:32.728530] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728534] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=4096, cccid=4 00:30:39.599 [2024-05-16 09:43:32.728538] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7370) on tqpair(0x1d53ec0): expected_datao=0, payload_size=4096 00:30:39.599 [2024-05-16 09:43:32.728543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728592] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728596] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728783] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.599 [2024-05-16 09:43:32.728790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.599 [2024-05-16 09:43:32.728793] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.728809] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.728817] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.728824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.728828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.728835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.599 [2024-05-16 09:43:32.728845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.599 [2024-05-16 09:43:32.733058] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.599 [2024-05-16 09:43:32.733066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.599 [2024-05-16 09:43:32.733070] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.733073] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=4096, cccid=4 00:30:39.599 [2024-05-16 09:43:32.733078] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7370) on tqpair(0x1d53ec0): expected_datao=0, payload_size=4096 00:30:39.599 [2024-05-16 09:43:32.733082] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.733088] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.733092] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772062] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.599 [2024-05-16 09:43:32.772071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.599 [2024-05-16 09:43:32.772074] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772078] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.772086] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.772096] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.772104] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.772111] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.772116] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.772120] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:39.599 [2024-05-16 09:43:32.772125] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:39.599 [2024-05-16 09:43:32.772130] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:39.599 [2024-05-16 09:43:32.772145] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.772156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.599 [2024-05-16 09:43:32.772162] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772166] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772169] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.772175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.599 [2024-05-16 09:43:32.772189] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.599 [2024-05-16 09:43:32.772194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd74d0, cid 5, qid 0 00:30:39.599 [2024-05-16 09:43:32.772279] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.599 [2024-05-16 09:43:32.772285] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.599 [2024-05-16 09:43:32.772288] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.772299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.599 [2024-05-16 09:43:32.772305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.599 [2024-05-16 09:43:32.772308] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd74d0) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.772321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772325] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.772331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.599 [2024-05-16 09:43:32.772341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd74d0, cid 5, qid 0 00:30:39.599 [2024-05-16 09:43:32.772514] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.599 [2024-05-16 09:43:32.772520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.599 [2024-05-16 09:43:32.772524] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd74d0) on tqpair=0x1d53ec0 00:30:39.599 [2024-05-16 09:43:32.772537] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.599 [2024-05-16 09:43:32.772542] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d53ec0) 00:30:39.599 [2024-05-16 09:43:32.772549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.600 [2024-05-16 09:43:32.772558] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd74d0, cid 5, qid 0 00:30:39.600 [2024-05-16 09:43:32.772702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.600 [2024-05-16 09:43:32.772708] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.600 [2024-05-16 09:43:32.772711] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.772715] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd74d0) on tqpair=0x1d53ec0 00:30:39.600 [2024-05-16 09:43:32.772724] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.772728] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d53ec0) 00:30:39.600 [2024-05-16 09:43:32.772734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.600 [2024-05-16 09:43:32.772743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd74d0, cid 5, qid 0 00:30:39.600 [2024-05-16 09:43:32.772929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.600 [2024-05-16 09:43:32.772935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.600 [2024-05-16 09:43:32.772938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.772942] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd74d0) on tqpair=0x1d53ec0 00:30:39.600 [2024-05-16 09:43:32.772954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.772958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d53ec0) 00:30:39.600 [2024-05-16 09:43:32.772964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.600 [2024-05-16 09:43:32.772971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.772975] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d53ec0) 00:30:39.600 [2024-05-16 09:43:32.772981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.600 [2024-05-16 09:43:32.772988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.772991] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d53ec0) 00:30:39.600 [2024-05-16 09:43:32.772997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.600 [2024-05-16 09:43:32.773004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d53ec0) 00:30:39.600 [2024-05-16 09:43:32.773014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.600 [2024-05-16 09:43:32.773024] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd74d0, cid 5, qid 0 00:30:39.600 [2024-05-16 09:43:32.773029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7370, cid 4, qid 0 00:30:39.600 [2024-05-16 09:43:32.773034] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1dd7630, cid 6, qid 0 00:30:39.600 [2024-05-16 09:43:32.773039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7790, cid 7, qid 0 00:30:39.600 [2024-05-16 09:43:32.773292] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.600 [2024-05-16 09:43:32.773299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.600 [2024-05-16 09:43:32.773304] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773308] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=8192, cccid=5 00:30:39.600 [2024-05-16 09:43:32.773312] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd74d0) on tqpair(0x1d53ec0): expected_datao=0, payload_size=8192 00:30:39.600 [2024-05-16 09:43:32.773316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773380] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773384] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.600 [2024-05-16 09:43:32.773395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.600 [2024-05-16 09:43:32.773399] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773402] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=512, cccid=4 00:30:39.600 [2024-05-16 09:43:32.773406] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7370) on tqpair(0x1d53ec0): expected_datao=0, payload_size=512 00:30:39.600 [2024-05-16 09:43:32.773410] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773428] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773432] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773437] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.600 [2024-05-16 09:43:32.773443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.600 [2024-05-16 09:43:32.773446] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773450] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=512, cccid=6 00:30:39.600 [2024-05-16 09:43:32.773454] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7630) on tqpair(0x1d53ec0): expected_datao=0, payload_size=512 00:30:39.600 [2024-05-16 09:43:32.773458] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773464] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773468] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773473] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:39.600 [2024-05-16 09:43:32.773479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:39.600 [2024-05-16 09:43:32.773482] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773485] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d53ec0): datao=0, datal=4096, cccid=7 
00:30:39.600 [2024-05-16 09:43:32.773490] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd7790) on tqpair(0x1d53ec0): expected_datao=0, payload_size=4096 00:30:39.600 [2024-05-16 09:43:32.773494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773500] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773504] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773513] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.600 [2024-05-16 09:43:32.773519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.600 [2024-05-16 09:43:32.773522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd74d0) on tqpair=0x1d53ec0 00:30:39.600 [2024-05-16 09:43:32.773539] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.600 [2024-05-16 09:43:32.773544] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.600 [2024-05-16 09:43:32.773548] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773551] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7370) on tqpair=0x1d53ec0 00:30:39.600 [2024-05-16 09:43:32.773562] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.600 [2024-05-16 09:43:32.773568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.600 [2024-05-16 09:43:32.773571] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773575] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7630) on tqpair=0x1d53ec0 00:30:39.600 [2024-05-16 09:43:32.773584] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.600 [2024-05-16 09:43:32.773590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.600 [2024-05-16 09:43:32.773593] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.600 [2024-05-16 09:43:32.773597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7790) on tqpair=0x1d53ec0 00:30:39.600 ===================================================== 00:30:39.600 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.600 ===================================================== 00:30:39.600 Controller Capabilities/Features 00:30:39.600 ================================ 00:30:39.600 Vendor ID: 8086 00:30:39.600 Subsystem Vendor ID: 8086 00:30:39.600 Serial Number: SPDK00000000000001 00:30:39.600 Model Number: SPDK bdev Controller 00:30:39.600 Firmware Version: 24.05 00:30:39.600 Recommended Arb Burst: 6 00:30:39.600 IEEE OUI Identifier: e4 d2 5c 00:30:39.600 Multi-path I/O 00:30:39.601 May have multiple subsystem ports: Yes 00:30:39.601 May have multiple controllers: Yes 00:30:39.601 Associated with SR-IOV VF: No 00:30:39.601 Max Data Transfer Size: 131072 00:30:39.601 Max Number of Namespaces: 32 00:30:39.601 Max Number of I/O Queues: 127 00:30:39.601 NVMe Specification Version (VS): 1.3 00:30:39.601 NVMe Specification Version (Identify): 1.3 00:30:39.601 Maximum Queue Entries: 128 00:30:39.601 Contiguous Queues Required: Yes 00:30:39.601 Arbitration Mechanisms Supported 00:30:39.601 Weighted Round Robin: Not Supported 00:30:39.601 Vendor 
Specific: Not Supported 00:30:39.601 Reset Timeout: 15000 ms 00:30:39.601 Doorbell Stride: 4 bytes 00:30:39.601 NVM Subsystem Reset: Not Supported 00:30:39.601 Command Sets Supported 00:30:39.601 NVM Command Set: Supported 00:30:39.601 Boot Partition: Not Supported 00:30:39.601 Memory Page Size Minimum: 4096 bytes 00:30:39.601 Memory Page Size Maximum: 4096 bytes 00:30:39.601 Persistent Memory Region: Not Supported 00:30:39.601 Optional Asynchronous Events Supported 00:30:39.601 Namespace Attribute Notices: Supported 00:30:39.601 Firmware Activation Notices: Not Supported 00:30:39.601 ANA Change Notices: Not Supported 00:30:39.601 PLE Aggregate Log Change Notices: Not Supported 00:30:39.601 LBA Status Info Alert Notices: Not Supported 00:30:39.601 EGE Aggregate Log Change Notices: Not Supported 00:30:39.601 Normal NVM Subsystem Shutdown event: Not Supported 00:30:39.601 Zone Descriptor Change Notices: Not Supported 00:30:39.601 Discovery Log Change Notices: Not Supported 00:30:39.601 Controller Attributes 00:30:39.601 128-bit Host Identifier: Supported 00:30:39.601 Non-Operational Permissive Mode: Not Supported 00:30:39.601 NVM Sets: Not Supported 00:30:39.601 Read Recovery Levels: Not Supported 00:30:39.601 Endurance Groups: Not Supported 00:30:39.601 Predictable Latency Mode: Not Supported 00:30:39.601 Traffic Based Keep ALive: Not Supported 00:30:39.601 Namespace Granularity: Not Supported 00:30:39.601 SQ Associations: Not Supported 00:30:39.601 UUID List: Not Supported 00:30:39.601 Multi-Domain Subsystem: Not Supported 00:30:39.601 Fixed Capacity Management: Not Supported 00:30:39.601 Variable Capacity Management: Not Supported 00:30:39.601 Delete Endurance Group: Not Supported 00:30:39.601 Delete NVM Set: Not Supported 00:30:39.601 Extended LBA Formats Supported: Not Supported 00:30:39.601 Flexible Data Placement Supported: Not Supported 00:30:39.601 00:30:39.601 Controller Memory Buffer Support 00:30:39.601 ================================ 00:30:39.601 Supported: No 00:30:39.601 00:30:39.601 Persistent Memory Region Support 00:30:39.601 ================================ 00:30:39.601 Supported: No 00:30:39.601 00:30:39.601 Admin Command Set Attributes 00:30:39.601 ============================ 00:30:39.601 Security Send/Receive: Not Supported 00:30:39.601 Format NVM: Not Supported 00:30:39.601 Firmware Activate/Download: Not Supported 00:30:39.601 Namespace Management: Not Supported 00:30:39.601 Device Self-Test: Not Supported 00:30:39.601 Directives: Not Supported 00:30:39.601 NVMe-MI: Not Supported 00:30:39.601 Virtualization Management: Not Supported 00:30:39.601 Doorbell Buffer Config: Not Supported 00:30:39.601 Get LBA Status Capability: Not Supported 00:30:39.601 Command & Feature Lockdown Capability: Not Supported 00:30:39.601 Abort Command Limit: 4 00:30:39.601 Async Event Request Limit: 4 00:30:39.601 Number of Firmware Slots: N/A 00:30:39.601 Firmware Slot 1 Read-Only: N/A 00:30:39.601 Firmware Activation Without Reset: N/A 00:30:39.601 Multiple Update Detection Support: N/A 00:30:39.601 Firmware Update Granularity: No Information Provided 00:30:39.601 Per-Namespace SMART Log: No 00:30:39.601 Asymmetric Namespace Access Log Page: Not Supported 00:30:39.601 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:39.601 Command Effects Log Page: Supported 00:30:39.601 Get Log Page Extended Data: Supported 00:30:39.601 Telemetry Log Pages: Not Supported 00:30:39.601 Persistent Event Log Pages: Not Supported 00:30:39.601 Supported Log Pages Log Page: May Support 00:30:39.601 Commands 
Supported & Effects Log Page: Not Supported 00:30:39.601 Feature Identifiers & Effects Log Page:May Support 00:30:39.601 NVMe-MI Commands & Effects Log Page: May Support 00:30:39.601 Data Area 4 for Telemetry Log: Not Supported 00:30:39.601 Error Log Page Entries Supported: 128 00:30:39.601 Keep Alive: Supported 00:30:39.601 Keep Alive Granularity: 10000 ms 00:30:39.601 00:30:39.601 NVM Command Set Attributes 00:30:39.601 ========================== 00:30:39.601 Submission Queue Entry Size 00:30:39.601 Max: 64 00:30:39.601 Min: 64 00:30:39.601 Completion Queue Entry Size 00:30:39.601 Max: 16 00:30:39.601 Min: 16 00:30:39.601 Number of Namespaces: 32 00:30:39.601 Compare Command: Supported 00:30:39.601 Write Uncorrectable Command: Not Supported 00:30:39.601 Dataset Management Command: Supported 00:30:39.601 Write Zeroes Command: Supported 00:30:39.601 Set Features Save Field: Not Supported 00:30:39.601 Reservations: Supported 00:30:39.601 Timestamp: Not Supported 00:30:39.601 Copy: Supported 00:30:39.601 Volatile Write Cache: Present 00:30:39.601 Atomic Write Unit (Normal): 1 00:30:39.601 Atomic Write Unit (PFail): 1 00:30:39.601 Atomic Compare & Write Unit: 1 00:30:39.601 Fused Compare & Write: Supported 00:30:39.601 Scatter-Gather List 00:30:39.601 SGL Command Set: Supported 00:30:39.601 SGL Keyed: Supported 00:30:39.601 SGL Bit Bucket Descriptor: Not Supported 00:30:39.601 SGL Metadata Pointer: Not Supported 00:30:39.601 Oversized SGL: Not Supported 00:30:39.601 SGL Metadata Address: Not Supported 00:30:39.601 SGL Offset: Supported 00:30:39.601 Transport SGL Data Block: Not Supported 00:30:39.601 Replay Protected Memory Block: Not Supported 00:30:39.601 00:30:39.601 Firmware Slot Information 00:30:39.601 ========================= 00:30:39.601 Active slot: 1 00:30:39.601 Slot 1 Firmware Revision: 24.05 00:30:39.601 00:30:39.601 00:30:39.601 Commands Supported and Effects 00:30:39.601 ============================== 00:30:39.601 Admin Commands 00:30:39.601 -------------- 00:30:39.601 Get Log Page (02h): Supported 00:30:39.601 Identify (06h): Supported 00:30:39.601 Abort (08h): Supported 00:30:39.601 Set Features (09h): Supported 00:30:39.601 Get Features (0Ah): Supported 00:30:39.601 Asynchronous Event Request (0Ch): Supported 00:30:39.601 Keep Alive (18h): Supported 00:30:39.601 I/O Commands 00:30:39.601 ------------ 00:30:39.601 Flush (00h): Supported LBA-Change 00:30:39.601 Write (01h): Supported LBA-Change 00:30:39.601 Read (02h): Supported 00:30:39.601 Compare (05h): Supported 00:30:39.601 Write Zeroes (08h): Supported LBA-Change 00:30:39.601 Dataset Management (09h): Supported LBA-Change 00:30:39.601 Copy (19h): Supported LBA-Change 00:30:39.601 Unknown (79h): Supported LBA-Change 00:30:39.601 Unknown (7Ah): Supported 00:30:39.601 00:30:39.601 Error Log 00:30:39.601 ========= 00:30:39.601 00:30:39.601 Arbitration 00:30:39.601 =========== 00:30:39.601 Arbitration Burst: 1 00:30:39.601 00:30:39.601 Power Management 00:30:39.601 ================ 00:30:39.601 Number of Power States: 1 00:30:39.601 Current Power State: Power State #0 00:30:39.601 Power State #0: 00:30:39.601 Max Power: 0.00 W 00:30:39.601 Non-Operational State: Operational 00:30:39.601 Entry Latency: Not Reported 00:30:39.601 Exit Latency: Not Reported 00:30:39.601 Relative Read Throughput: 0 00:30:39.601 Relative Read Latency: 0 00:30:39.601 Relative Write Throughput: 0 00:30:39.601 Relative Write Latency: 0 00:30:39.601 Idle Power: Not Reported 00:30:39.601 Active Power: Not Reported 00:30:39.601 Non-Operational 
Permissive Mode: Not Supported 00:30:39.601 00:30:39.601 Health Information 00:30:39.601 ================== 00:30:39.601 Critical Warnings: 00:30:39.601 Available Spare Space: OK 00:30:39.601 Temperature: OK 00:30:39.601 Device Reliability: OK 00:30:39.601 Read Only: No 00:30:39.601 Volatile Memory Backup: OK 00:30:39.601 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:39.601 Temperature Threshold: [2024-05-16 09:43:32.773695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.773701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.773708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.773718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7790, cid 7, qid 0 00:30:39.602 [2024-05-16 09:43:32.773910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.773916] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.773919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.773923] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7790) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.773950] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:39.602 [2024-05-16 09:43:32.773961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.602 [2024-05-16 09:43:32.773968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.602 [2024-05-16 09:43:32.773974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.602 [2024-05-16 09:43:32.773980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.602 [2024-05-16 09:43:32.773988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.773992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.773995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.774002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.774013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.774200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.774207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.774210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.774222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774225] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774229] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.774235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.774247] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.774468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.774474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.774477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774481] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.774486] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:39.602 [2024-05-16 09:43:32.774490] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:39.602 [2024-05-16 09:43:32.774500] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.774514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.774523] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.774684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.774690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.774694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774697] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.774708] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774715] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.774722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.774731] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.774954] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.774960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.774963] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774967] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.774977] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774981] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.774985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.774991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.775000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.775202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.775208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.775211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.775225] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.775239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.775250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.775469] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.775475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.775478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.775492] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775496] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.775506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.775515] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.775738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.775744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.775747] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775751] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.775761] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.775765] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.602 [2024-05-16 
09:43:32.775768] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.602 [2024-05-16 09:43:32.775775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.602 [2024-05-16 09:43:32.775784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.602 [2024-05-16 09:43:32.776006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.602 [2024-05-16 09:43:32.776012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.602 [2024-05-16 09:43:32.776016] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.776019] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.602 [2024-05-16 09:43:32.776029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:39.602 [2024-05-16 09:43:32.776033] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:39.603 [2024-05-16 09:43:32.776036] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d53ec0) 00:30:39.603 [2024-05-16 09:43:32.776043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.603 [2024-05-16 09:43:32.780056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd7210, cid 3, qid 0 00:30:39.603 [2024-05-16 09:43:32.780066] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:39.603 [2024-05-16 09:43:32.780072] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:39.603 [2024-05-16 09:43:32.780075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:39.603 [2024-05-16 09:43:32.780079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd7210) on tqpair=0x1d53ec0 00:30:39.603 [2024-05-16 09:43:32.780087] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:30:39.603 0 Kelvin (-273 Celsius) 00:30:39.603 Available Spare: 0% 00:30:39.603 Available Spare Threshold: 0% 00:30:39.603 Life Percentage Used: 0% 00:30:39.603 Data Units Read: 0 00:30:39.603 Data Units Written: 0 00:30:39.603 Host Read Commands: 0 00:30:39.603 Host Write Commands: 0 00:30:39.603 Controller Busy Time: 0 minutes 00:30:39.603 Power Cycles: 0 00:30:39.603 Power On Hours: 0 hours 00:30:39.603 Unsafe Shutdowns: 0 00:30:39.603 Unrecoverable Media Errors: 0 00:30:39.603 Lifetime Error Log Entries: 0 00:30:39.603 Warning Temperature Time: 0 minutes 00:30:39.603 Critical Temperature Time: 0 minutes 00:30:39.603 00:30:39.603 Number of Queues 00:30:39.603 ================ 00:30:39.603 Number of I/O Submission Queues: 127 00:30:39.603 Number of I/O Completion Queues: 127 00:30:39.603 00:30:39.603 Active Namespaces 00:30:39.603 ================= 00:30:39.603 Namespace ID:1 00:30:39.603 Error Recovery Timeout: Unlimited 00:30:39.603 Command Set Identifier: NVM (00h) 00:30:39.603 Deallocate: Supported 00:30:39.603 Deallocated/Unwritten Error: Not Supported 00:30:39.603 Deallocated Read Value: Unknown 00:30:39.603 Deallocate in Write Zeroes: Not Supported 00:30:39.603 Deallocated Guard Field: 0xFFFF 00:30:39.603 Flush: Supported 00:30:39.603 Reservation: Supported 00:30:39.603 Namespace Sharing Capabilities: Multiple Controllers 00:30:39.603 Size (in 
LBAs): 131072 (0GiB) 00:30:39.603 Capacity (in LBAs): 131072 (0GiB) 00:30:39.603 Utilization (in LBAs): 131072 (0GiB) 00:30:39.603 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:39.603 EUI64: ABCDEF0123456789 00:30:39.603 UUID: b09ea1da-014a-45e4-9b37-f9ffbba7c499 00:30:39.603 Thin Provisioning: Not Supported 00:30:39.603 Per-NS Atomic Units: Yes 00:30:39.603 Atomic Boundary Size (Normal): 0 00:30:39.603 Atomic Boundary Size (PFail): 0 00:30:39.603 Atomic Boundary Offset: 0 00:30:39.603 Maximum Single Source Range Length: 65535 00:30:39.603 Maximum Copy Length: 65535 00:30:39.603 Maximum Source Range Count: 1 00:30:39.603 NGUID/EUI64 Never Reused: No 00:30:39.603 Namespace Write Protected: No 00:30:39.603 Number of LBA Formats: 1 00:30:39.603 Current LBA Format: LBA Format #00 00:30:39.603 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:39.603 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:39.603 rmmod nvme_tcp 00:30:39.603 rmmod nvme_fabrics 00:30:39.603 rmmod nvme_keyring 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 444442 ']' 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 444442 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 444442 ']' 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 444442 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 444442 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 444442' 00:30:39.603 killing process with pid 444442 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 
-- # kill 444442 00:30:39.603 [2024-05-16 09:43:32.920134] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:39.603 09:43:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 444442 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:39.603 09:43:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.151 09:43:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:42.151 00:30:42.151 real 0m10.439s 00:30:42.151 user 0m5.706s 00:30:42.151 sys 0m5.641s 00:30:42.151 09:43:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:42.151 09:43:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:42.151 ************************************ 00:30:42.151 END TEST nvmf_identify 00:30:42.151 ************************************ 00:30:42.151 09:43:35 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:42.151 09:43:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:42.151 09:43:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:42.151 09:43:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:42.151 ************************************ 00:30:42.151 START TEST nvmf_perf 00:30:42.151 ************************************ 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:42.151 * Looking for test storage... 
00:30:42.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.151 09:43:35 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:42.151 09:43:35 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:42.152 09:43:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:48.745 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:48.745 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:48.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:48.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:48.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.746 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:49.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:30:49.007 00:30:49.007 --- 10.0.0.2 ping statistics --- 00:30:49.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.007 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:49.007 00:30:49.007 --- 10.0.0.1 ping statistics --- 00:30:49.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.007 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=448784 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 448784 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 448784 ']' 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:49.007 09:43:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:49.007 [2024-05-16 09:43:42.442499] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:30:49.007 [2024-05-16 09:43:42.442550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.007 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.007 [2024-05-16 09:43:42.506799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.268 [2024-05-16 09:43:42.571689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.268 [2024-05-16 09:43:42.571724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
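Note: the nvmftestinit/nvmfappstart trace above brings up the target network namespace and launches nvmf_tgt before any subsystem is configured. Condensed into a standalone sketch, using the interface names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, the core mask and the build path reported by this particular run (all of which will differ on other hosts), the sequence is roughly:

    # Minimal sketch of the target-side bring-up performed by nvmftestinit/nvmfappstart above.
    # Interface names, addresses and the nvmf_tgt path are taken from this run, not general defaults.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator check
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &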
00:30:49.268 [2024-05-16 09:43:42.571731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.268 [2024-05-16 09:43:42.571737] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.268 [2024-05-16 09:43:42.571744] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.268 [2024-05-16 09:43:42.571879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.268 [2024-05-16 09:43:42.571997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.268 [2024-05-16 09:43:42.572152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.268 [2024-05-16 09:43:42.572153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.839 09:43:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:49.839 09:43:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:30:49.840 09:43:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:49.840 09:43:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.840 09:43:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:49.840 09:43:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.840 09:43:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:49.840 09:43:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:50.422 09:43:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:50.422 09:43:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:50.422 09:43:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:30:50.422 09:43:43 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:50.688 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:50.688 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:30:50.688 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:50.688 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:50.688 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:50.688 [2024-05-16 09:43:44.246185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.949 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.949 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:50.949 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.211 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:51.211 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:51.472 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.472 [2024-05-16 09:43:44.916483] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:51.472 [2024-05-16 09:43:44.916745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.472 09:43:44 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.734 09:43:45 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:30:51.734 09:43:45 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:51.734 09:43:45 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:51.734 09:43:45 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:53.120 Initializing NVMe Controllers 00:30:53.120 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:30:53.120 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:30:53.120 Initialization complete. Launching workers. 00:30:53.120 ======================================================== 00:30:53.120 Latency(us) 00:30:53.120 Device Information : IOPS MiB/s Average min max 00:30:53.120 PCIE (0000:65:00.0) NSID 1 from core 0: 79433.66 310.29 402.31 13.38 5318.30 00:30:53.120 ======================================================== 00:30:53.120 Total : 79433.66 310.29 402.31 13.38 5318.30 00:30:53.120 00:30:53.120 09:43:46 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:53.120 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.505 Initializing NVMe Controllers 00:30:54.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:54.505 Initialization complete. Launching workers. 
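Condensed, the target-side configuration traced above (host/perf.sh lines 28-49 in the trace) is a handful of rpc.py calls: load the local NVMe controller, create a malloc bdev, create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach both bdevs as namespaces, and listen on 10.0.0.2:4420. A sketch with the arguments used in this run (the gen_nvme.sh | load_subsystem_config pipe and the jq filter are taken from the trace; option semantics beyond what the trace shows are not asserted here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  NQN=nqn.2016-06.io.spdk:cnode1
  # Attach the local NVMe controller(s) as bdevs (Nvme0n1 here).
  "$SPDK/scripts/gen_nvme.sh" | "$RPC" load_subsystem_config
  # Recover the PCI address of that controller (0000:65:00.0 in this run).
  local_nvme_trid=$("$RPC" framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')
  "$RPC" bdev_malloc_create 64 512                               # 64 MB RAM-backed bdev, 512 B blocks ("Malloc0")
  "$RPC" nvmf_create_transport -t tcp -o                         # transport options exactly as in this run
  "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
  "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$RPC" nvmf_subsystem_add_ns "$NQN" Nvme0n1
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf run against 'trtype:PCIe traddr:0000:65:00.0' above bypasses the target entirely; its numbers are the local baseline that the NVMe/TCP results which follow can be compared against.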
00:30:54.505 ======================================================== 00:30:54.505 Latency(us) 00:30:54.505 Device Information : IOPS MiB/s Average min max 00:30:54.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 11111.81 107.20 46583.27 00:30:54.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19255.91 7058.99 47902.18 00:30:54.505 ======================================================== 00:30:54.505 Total : 147.00 0.57 14048.12 107.20 47902.18 00:30:54.505 00:30:54.505 09:43:47 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.505 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.891 Initializing NVMe Controllers 00:30:55.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:55.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:55.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:55.891 Initialization complete. Launching workers. 00:30:55.891 ======================================================== 00:30:55.891 Latency(us) 00:30:55.891 Device Information : IOPS MiB/s Average min max 00:30:55.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10547.55 41.20 3033.99 469.53 6686.15 00:30:55.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3763.48 14.70 8547.01 5032.87 17322.27 00:30:55.891 ======================================================== 00:30:55.891 Total : 14311.04 55.90 4483.79 469.53 17322.27 00:30:55.891 00:30:55.891 09:43:49 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:55.891 09:43:49 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:55.891 09:43:49 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:55.891 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.437 Initializing NVMe Controllers 00:30:58.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.437 Controller IO queue size 128, less than required. 00:30:58.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:58.437 Controller IO queue size 128, less than required. 00:30:58.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:58.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:58.437 Initialization complete. Launching workers. 
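In these spdk_nvme_perf tables the columns are IOPS, MiB/s, and average/min/max latency in microseconds; the bandwidth column is simply IOPS times the 4 KiB IO size. A quick check against the queue-depth-1 table above:

  # 94.00 IOPS x 4096 B / 2^20 ~= 0.37 MiB/s, matching NSID 1 in the q=1 run;
  # the same identity holds for the PCIe baseline (79433.66 x 4096 / 2^20 ~= 310.29).
  awk 'BEGIN { printf "%.2f\n", 94.00 * 4096 / 1048576 }'
  awk 'BEGIN { printf "%.2f\n", 79433.66 * 4096 / 1048576 }'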
00:30:58.437 ======================================================== 00:30:58.437 Latency(us) 00:30:58.437 Device Information : IOPS MiB/s Average min max 00:30:58.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1627.98 407.00 80142.35 54024.72 129999.36 00:30:58.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.99 144.50 230478.12 78817.01 375062.30 00:30:58.437 ======================================================== 00:30:58.437 Total : 2205.98 551.49 119532.23 54024.72 375062.30 00:30:58.437 00:30:58.437 09:43:51 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:58.437 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.438 No valid NVMe controllers or AIO or URING devices found 00:30:58.438 Initializing NVMe Controllers 00:30:58.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.438 Controller IO queue size 128, less than required. 00:30:58.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:58.438 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:58.438 Controller IO queue size 128, less than required. 00:30:58.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:58.438 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:58.438 WARNING: Some requested NVMe devices were skipped 00:30:58.438 09:43:51 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:58.438 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.994 Initializing NVMe Controllers 00:31:00.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.994 Controller IO queue size 128, less than required. 00:31:00.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:00.994 Controller IO queue size 128, less than required. 00:31:00.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:00.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:00.994 Initialization complete. Launching workers. 
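The -o 36964 run above is a negative test rather than a performance point: spdk_nvme_perf drops any namespace whose sector size does not evenly divide the requested IO size, and with both namespaces using 512-byte sectors nothing is left to measure, hence "No valid NVMe controllers or AIO or URING devices found". The check is plain modular arithmetic:

  echo $(( 36964 % 512 ))   # -> 100; non-zero, so both namespaces are removed from the test
  echo $(( 36864 % 512 ))   # -> 0;  a 36864-byte IO (72 x 512) would have been accepted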
00:31:00.994 00:31:00.994 ==================== 00:31:00.994 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:00.994 TCP transport: 00:31:00.994 polls: 19517 00:31:00.994 idle_polls: 10736 00:31:00.994 sock_completions: 8781 00:31:00.994 nvme_completions: 6531 00:31:00.994 submitted_requests: 9868 00:31:00.994 queued_requests: 1 00:31:00.994 00:31:00.994 ==================== 00:31:00.994 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:00.994 TCP transport: 00:31:00.994 polls: 19947 00:31:00.994 idle_polls: 10553 00:31:00.994 sock_completions: 9394 00:31:00.994 nvme_completions: 6635 00:31:00.994 submitted_requests: 10014 00:31:00.994 queued_requests: 1 00:31:00.994 ======================================================== 00:31:00.994 Latency(us) 00:31:00.994 Device Information : IOPS MiB/s Average min max 00:31:00.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1629.71 407.43 79756.75 40624.06 121619.85 00:31:00.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1655.66 413.92 78419.39 44444.77 143156.60 00:31:00.994 ======================================================== 00:31:00.994 Total : 3285.37 821.34 79082.79 40624.06 143156.60 00:31:00.994 00:31:00.994 09:43:54 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:00.994 09:43:54 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:01.255 rmmod nvme_tcp 00:31:01.255 rmmod nvme_fabrics 00:31:01.255 rmmod nvme_keyring 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 448784 ']' 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 448784 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 448784 ']' 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 448784 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:01.255 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 448784 00:31:01.517 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:01.517 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:01.517 09:43:54 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 448784' 00:31:01.517 killing process with pid 448784 00:31:01.517 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 448784 00:31:01.517 [2024-05-16 09:43:54.817654] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:01.517 09:43:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 448784 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.431 09:43:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.346 09:43:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.346 00:31:05.346 real 0m23.653s 00:31:05.346 user 0m58.201s 00:31:05.346 sys 0m7.827s 00:31:05.346 09:43:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:05.346 09:43:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:05.346 ************************************ 00:31:05.346 END TEST nvmf_perf 00:31:05.346 ************************************ 00:31:05.608 09:43:58 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:05.608 09:43:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:05.608 09:43:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:05.608 09:43:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.608 ************************************ 00:31:05.608 START TEST nvmf_fio_host 00:31:05.608 ************************************ 00:31:05.608 09:43:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:05.608 * Looking for test storage... 
00:31:05.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:05.608 09:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.608 09:43:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.608 09:43:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.608 09:43:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.609 09:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
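The nvmf/common.sh preamble sourced here also fixes the initiator identity for the whole run: NVME_HOSTNQN comes from nvme-cli's gen-hostnqn, and the NVME_HOSTID seen in the log is simply the UUID portion of that NQN. A sketch of that derivation (the parameter expansion is an illustrative shortcut, not necessarily the exact line common.sh uses):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':' -> the bare UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # the form common.sh keeps for later 'nvme connect' calls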
00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:13.760 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:13.760 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:13.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:13.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.760 09:44:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:31:13.760 00:31:13.760 --- 10.0.0.2 ping statistics --- 00:31:13.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.760 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:13.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:31:13.760 00:31:13.760 --- 10.0.0.1 ping statistics --- 00:31:13.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.760 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.760 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=455840 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 455840 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 455840 ']' 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:13.761 09:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 [2024-05-16 09:44:06.260001] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:31:13.761 [2024-05-16 09:44:06.260068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.761 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.761 [2024-05-16 09:44:06.328941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.761 [2024-05-16 09:44:06.402404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:13.761 [2024-05-16 09:44:06.402441] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.761 [2024-05-16 09:44:06.402449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.761 [2024-05-16 09:44:06.402455] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.761 [2024-05-16 09:44:06.402461] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:13.761 [2024-05-16 09:44:06.402597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.761 [2024-05-16 09:44:06.402713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:13.761 [2024-05-16 09:44:06.402868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.761 [2024-05-16 09:44:06.402868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 [2024-05-16 09:44:07.048515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 Malloc1 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:31:13.761 [2024-05-16 09:44:07.141359] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:13.761 [2024-05-16 09:44:07.141604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:13.761 
09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:13.761 09:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:14.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:14.023 fio-3.35 00:31:14.023 Starting 1 thread 00:31:14.023 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.571 00:31:16.571 test: (groupid=0, jobs=1): err= 0: pid=456318: Thu May 16 09:44:09 2024 00:31:16.571 read: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(107MiB/2004msec) 00:31:16.571 slat (usec): min=2, max=283, avg= 2.18, stdev= 2.45 00:31:16.571 clat (usec): min=3748, max=8991, avg=5169.58, stdev=686.96 00:31:16.571 lat (usec): min=3750, max=9004, avg=5171.76, stdev=687.10 00:31:16.571 clat percentiles (usec): 00:31:16.571 | 1.00th=[ 4293], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:31:16.571 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5145], 00:31:16.571 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5604], 95.00th=[ 7046], 00:31:16.571 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[ 8356], 99.95th=[ 8586], 00:31:16.571 | 99.99th=[ 8979] 00:31:16.571 bw ( KiB/s): min=47352, max=56952, per=99.93%, avg=54448.00, stdev=4731.85, samples=4 00:31:16.571 iops : min=11838, max=14238, avg=13612.00, stdev=1182.96, samples=4 00:31:16.571 write: IOPS=13.6k, BW=53.2MiB/s (55.7MB/s)(107MiB/2004msec); 0 zone resets 00:31:16.572 slat (usec): min=2, max=263, avg= 2.27, stdev= 1.77 00:31:16.572 clat (usec): min=2889, max=7752, avg=4175.25, stdev=568.39 00:31:16.572 lat (usec): min=2906, max=7754, avg=4177.52, stdev=568.56 00:31:16.572 clat percentiles (usec): 00:31:16.572 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851], 00:31:16.572 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:31:16.572 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 5735], 00:31:16.572 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6980], 99.95th=[ 7373], 00:31:16.572 | 99.99th=[ 7635] 00:31:16.572 bw ( KiB/s): min=48016, max=56768, per=100.00%, avg=54436.00, stdev=4282.23, samples=4 00:31:16.572 iops : min=12004, max=14192, avg=13609.00, stdev=1070.56, samples=4 00:31:16.572 lat (msec) : 4=21.02%, 10=78.98% 00:31:16.572 cpu : usr=75.44%, sys=23.32%, ctx=32, majf=0, minf=5 00:31:16.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:16.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:16.572 issued rwts: total=27297,27272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:16.572 00:31:16.572 Run status group 0 (all jobs): 00:31:16.572 READ: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=107MiB (112MB), run=2004-2004msec 00:31:16.572 WRITE: bw=53.2MiB/s (55.7MB/s), 53.2MiB/s-53.2MiB/s (55.7MB/s-55.7MB/s), io=107MiB (112MB), run=2004-2004msec 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:16.572 09:44:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:16.832 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:16.832 fio-3.35 00:31:16.832 Starting 1 thread 00:31:16.832 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.379 00:31:19.379 test: (groupid=0, jobs=1): err= 0: pid=456878: Thu May 16 09:44:12 2024 00:31:19.379 read: IOPS=9111, BW=142MiB/s (149MB/s)(286MiB/2008msec) 00:31:19.379 slat (usec): min=3, max=112, avg= 3.66, stdev= 1.63 00:31:19.379 clat (usec): min=1152, max=53524, avg=8707.83, stdev=3929.99 00:31:19.379 lat (usec): min=1155, max=53528, avg=8711.49, stdev=3930.04 
00:31:19.379 clat percentiles (usec): 00:31:19.379 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6587], 00:31:19.379 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 8979], 00:31:19.379 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[10945], 95.00th=[11731], 00:31:19.379 | 99.00th=[15533], 99.50th=[45876], 99.90th=[52691], 99.95th=[53216], 00:31:19.379 | 99.99th=[53216] 00:31:19.379 bw ( KiB/s): min=66528, max=83776, per=49.36%, avg=71952.00, stdev=7966.76, samples=4 00:31:19.379 iops : min= 4158, max= 5236, avg=4497.00, stdev=497.92, samples=4 00:31:19.379 write: IOPS=5401, BW=84.4MiB/s (88.5MB/s)(146MiB/1735msec); 0 zone resets 00:31:19.379 slat (usec): min=40, max=354, avg=41.14, stdev= 7.18 00:31:19.379 clat (usec): min=2050, max=16610, avg=9456.41, stdev=1614.71 00:31:19.379 lat (usec): min=2091, max=16651, avg=9497.55, stdev=1615.89 00:31:19.379 clat percentiles (usec): 00:31:19.379 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8160], 00:31:19.379 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:31:19.379 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12387], 00:31:19.379 | 99.00th=[13829], 99.50th=[14091], 99.90th=[16057], 99.95th=[16188], 00:31:19.379 | 99.99th=[16581] 00:31:19.379 bw ( KiB/s): min=69472, max=86624, per=86.41%, avg=74672.00, stdev=8040.83, samples=4 00:31:19.379 iops : min= 4342, max= 5414, avg=4667.00, stdev=502.55, samples=4 00:31:19.379 lat (msec) : 2=0.03%, 4=0.51%, 10=70.66%, 20=28.34%, 50=0.24% 00:31:19.379 lat (msec) : 100=0.22% 00:31:19.379 cpu : usr=84.30%, sys=14.10%, ctx=16, majf=0, minf=8 00:31:19.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:31:19.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:19.379 issued rwts: total=18295,9371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:19.379 00:31:19.379 Run status group 0 (all jobs): 00:31:19.379 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=286MiB (300MB), run=2008-2008msec 00:31:19.379 WRITE: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=146MiB (154MB), run=1735-1735msec 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.379 09:44:12 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.379 rmmod nvme_tcp 00:31:19.379 rmmod nvme_fabrics 00:31:19.379 rmmod nvme_keyring 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 455840 ']' 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 455840 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 455840 ']' 00:31:19.379 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 455840 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 455840 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 455840' 00:31:19.380 killing process with pid 455840 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 455840 00:31:19.380 [2024-05-16 09:44:12.740492] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 455840 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.380 09:44:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.921 09:44:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:21.921 00:31:21.921 real 0m16.001s 00:31:21.921 user 0m56.561s 00:31:21.921 sys 0m6.921s 00:31:21.921 09:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:21.921 09:44:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:21.921 ************************************ 00:31:21.921 END TEST nvmf_fio_host 00:31:21.921 ************************************ 00:31:21.921 09:44:14 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:21.921 09:44:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:21.921 09:44:14 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:31:21.921 09:44:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:21.921 ************************************ 00:31:21.921 START TEST nvmf_failover 00:31:21.921 ************************************ 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:21.921 * Looking for test storage... 00:31:21.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.921 09:44:15 
nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:21.921 09:44:15 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:21.921 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.922 09:44:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.922 09:44:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.922 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:21.922 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:21.922 09:44:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:21.922 09:44:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.507 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
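The trace above is nvmf/common.sh's gather_supported_nvmf_pci_devs seeding its e810/x722/mlx ID tables and then walking the PCI bus for matching NICs. The sketch below is only a rough standalone approximation of that scan: the device IDs are copied from the xtrace, but lspci stands in for the script's internal pci_bus_cache helper and everything else is illustrative.

#!/usr/bin/env bash
# Rough approximation of the device scan traced above. The real
# gather_supported_nvmf_pci_devs builds its lists from a pci_bus_cache
# helper; here an lspci scan is used instead. IDs are from the trace.
e810_ids="8086:1592 8086:159b"     # Intel E810 (CVL) variants
x722_ids="8086:37d2"               # Intel X722
mlx_ids="15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d 15b3:1017 15b3:1019 15b3:1015 15b3:1013"

net_devs=()
for id in $e810_ids $x722_ids $mlx_ids; do
    # lspci -Dn prints "<domain:bus:dev.fn> <class>: <vendor:device> ...";
    # the first field is the PCI address we need for the sysfs lookup.
    while read -r pci _; do
        [[ -n $pci ]] || continue
        # Each matching NIC exposes its netdev name(s) under sysfs.
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")
        done
    done < <(lspci -Dn -d "$id" 2>/dev/null)
done

printf 'Found net devices: %s\n' "${net_devs[*]:-<none>}"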
00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:28.508 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:28.508 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:28.508 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:28.508 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:28.508 09:44:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:28.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:28.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:31:28.769 00:31:28.769 --- 10.0.0.2 ping statistics --- 00:31:28.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.769 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:31:28.769 00:31:28.769 --- 10.0.0.1 ping statistics --- 00:31:28.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.769 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=461517 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 461517 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 461517 ']' 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:28.769 09:44:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:28.769 [2024-05-16 09:44:22.219179] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
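For reference, the target-side network plumbing traced above (nvmf_tcp_init) boils down to the commands below, collected in one place. The interface names (cvl_0_0/cvl_0_1), the namespace name and the 10.0.0.x addresses are simply the ones this run happened to use, not fixed constants of the test suite.

# Target NIC goes into its own namespace; initiator side stays on the host.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                            # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address (namespace side)
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # host -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> host sanity check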
00:31:28.769 [2024-05-16 09:44:22.219244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.769 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.769 [2024-05-16 09:44:22.305423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:29.028 [2024-05-16 09:44:22.398222] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:29.028 [2024-05-16 09:44:22.398275] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:29.028 [2024-05-16 09:44:22.398283] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:29.028 [2024-05-16 09:44:22.398290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:29.028 [2024-05-16 09:44:22.398296] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:29.028 [2024-05-16 09:44:22.398437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:29.028 [2024-05-16 09:44:22.398601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:29.028 [2024-05-16 09:44:22.398602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.598 09:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:29.859 [2024-05-16 09:44:23.184638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.859 09:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:29.859 Malloc0 00:31:29.859 09:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:30.120 09:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.391 09:44:23 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.392 [2024-05-16 09:44:23.899921] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:30.392 [2024-05-16 09:44:23.900161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.392 09:44:23 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:30.655 [2024-05-16 09:44:24.064531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:30.655 09:44:24 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:30.915 [2024-05-16 09:44:24.225009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=461885 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 461885 /var/tmp/bdevperf.sock 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 461885 ']' 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:30.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
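The failover scenario that host/failover.sh drives from here on reduces to the RPC sequence below, assembled from the traced commands. "rpc.py" abbreviates scripts/rpc.py from the SPDK tree and is assumed to be on PATH; NQN and IP are shell variables introduced here for brevity, and the target/bdevperf RPC sockets are the defaults shown in the trace.

NQN=nqn.2016-06.io.spdk:cnode1
IP=10.0.0.2

# Target setup: TCP transport, a 64 MiB malloc bdev, one subsystem, three listeners.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a "$IP" -s "$port"
done

# bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f)
# attaches the same subsystem through two of the three listeners ...
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$IP" -s 4420 -f ipv4 -n "$NQN"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$IP" -s 4421 -f ipv4 -n "$NQN"

# ... and while I/O runs, listeners are pulled and re-added to force path failover:
rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a "$IP" -s 4420   # drop the active path
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$IP" -s 4422 -f ipv4 -n "$NQN"
rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a "$IP" -s 4421
sleep 3
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a "$IP" -s 4420      # bring the first path back
sleep 1
rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a "$IP" -s 4422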
00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:30.915 09:44:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:31.859 09:44:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:31.859 09:44:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:31.859 09:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:31.859 NVMe0n1 00:31:31.859 09:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:32.120 00:31:32.120 09:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:32.120 09:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=462218 00:31:32.120 09:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:31:33.061 09:44:26 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.321 09:44:26 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:31:36.219 09:44:29 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:36.480 00:31:36.741 09:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:36.741 [2024-05-16 09:44:30.185513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185555] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is 
same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185638] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 [2024-05-16 09:44:30.185661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab25e0 is same with the state(5) to be set 00:31:36.741 09:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:31:40.042 09:44:33 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.042 [2024-05-16 09:44:33.360852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.042 09:44:33 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:31:40.983 09:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:41.243 09:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 462218 00:31:47.834 0 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # 
killprocess 461885 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 461885 ']' 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 461885 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 461885 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 461885' 00:31:47.834 killing process with pid 461885 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 461885 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 461885 00:31:47.834 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:47.834 [2024-05-16 09:44:24.297182] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:31:47.834 [2024-05-16 09:44:24.297278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461885 ] 00:31:47.834 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.834 [2024-05-16 09:44:24.358132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.834 [2024-05-16 09:44:24.422457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.834 Running I/O for 15 seconds... 
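The dump that follows is the try.txt output bdevperf wrote while listeners were being pulled. Before it is printed, both the target (pid 455840, earlier) and bdevperf (pid 461885) are torn down through the killprocess helper whose xtrace appears above. A simplified approximation of that helper, reconstructed only from the branches visible in this log, looks like this:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1            # bail out if the process is already gone
    [[ $(uname) == Linux ]] || return 1               # only the Linux path appears in this trace
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. "reactor_0" for an SPDK app
    if [[ $process_name != sudo ]]; then              # sudo-launched apps take another branch (not shown)
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true                   # reap the child; tolerate a non-zero exit
}

# usage matching the trace:
# killprocess 461885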
00:31:47.834 [2024-05-16 09:44:26.741532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.834 [2024-05-16 09:44:26.741737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741746] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.834 [2024-05-16 09:44:26.741754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.834 [2024-05-16 09:44:26.741918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.834 [2024-05-16 09:44:26.741929] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.741936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.741946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.741955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.741965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.741973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.741982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.741992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:47.835 [2024-05-16 09:44:26.742278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.835 [2024-05-16 09:44:26.742377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742445] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.835 [2024-05-16 09:44:26.742603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.835 [2024-05-16 09:44:26.742610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.836 [2024-05-16 09:44:26.742978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.742989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.742997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:47.836 [2024-05-16 09:44:26.743128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.836 [2024-05-16 09:44:26.743296] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.836 [2024-05-16 09:44:26.743303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.837 [2024-05-16 09:44:26.743517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.837 [2024-05-16 09:44:26.743533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.837 [2024-05-16 09:44:26.743550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.837 [2024-05-16 09:44:26.743566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.837 [2024-05-16 09:44:26.743582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.837 [2024-05-16 09:44:26.743599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:26.743616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:26.743625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:47.837 [2024-05-16 09:44:26.743632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:47.837 [2024-05-16 09:44:26.743649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:47.837 [2024-05-16 09:44:26.743666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:47.837 [2024-05-16 09:44:26.743683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:47.837 [2024-05-16 09:44:26.743699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:47.837 [2024-05-16 09:44:26.743717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:47.837 [2024-05-16 09:44:26.743742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:47.837 [2024-05-16 09:44:26.743749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0
00:31:47.837 [2024-05-16 09:44:26.743757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743794] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17d9f50 was disconnected and freed. reset controller.
00:31:47.837 [2024-05-16 09:44:26.743809] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:47.837 [2024-05-16 09:44:26.743828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:47.837 [2024-05-16 09:44:26.743836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:47.837 [2024-05-16 09:44:26.743851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:47.837 [2024-05-16 09:44:26.743867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:47.837 [2024-05-16 09:44:26.743882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:47.837 [2024-05-16 09:44:26.743889] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:47.837 [2024-05-16 09:44:26.747483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:47.837 [2024-05-16 09:44:26.747510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bb0d0 (9): Bad file descriptor
00:31:47.837 [2024-05-16 09:44:26.781392] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:47.837 [2024-05-16 09:44:30.188281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.837 [2024-05-16 09:44:30.188435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.837 [2024-05-16 09:44:30.188444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188496] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.838 [2024-05-16 09:44:30.188766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:28936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:28944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:28960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:28992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:29008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.838 [2024-05-16 09:44:30.188981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.838 [2024-05-16 09:44:30.188990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.188998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29040 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 [2024-05-16 09:44:30.189109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 [2024-05-16 09:44:30.189133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 [2024-05-16 09:44:30.189156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 [2024-05-16 09:44:30.189179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 [2024-05-16 09:44:30.189204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 [2024-05-16 09:44:30.189231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.839 
[2024-05-16 09:44:30.189260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.189959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.839 [2024-05-16 09:44:30.189973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.839 [2024-05-16 09:44:30.190004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.839 [2024-05-16 09:44:30.190015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29296 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29304 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29312 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29320 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29328 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29336 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29344 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 
09:44:30.190286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29352 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29360 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29368 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29376 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29384 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29392 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29400 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29408 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29416 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29424 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29432 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:29448 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29456 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29464 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29472 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29480 len:8 PRP1 0x0 PRP2 0x0 00:31:47.840 [2024-05-16 09:44:30.190721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.840 [2024-05-16 09:44:30.190729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.840 [2024-05-16 09:44:30.190734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.840 [2024-05-16 09:44:30.190740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29488 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29496 len:8 PRP1 0x0 PRP2 0x0 
00:31:47.841 [2024-05-16 09:44:30.190774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29504 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29512 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29520 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29528 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29536 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29544 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29552 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.190975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.190981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29560 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.190989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.190996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29568 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29576 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29584 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29592 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29600 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29608 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29616 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.191183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.191189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.191195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29624 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.191202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.201595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28872 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.201641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.201648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28880 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.201668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:47.841 [2024-05-16 09:44:30.201675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28888 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.201694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.201701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28896 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.201724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.201732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28904 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.201750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.201757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28912 len:8 PRP1 0x0 PRP2 0x0 00:31:47.841 [2024-05-16 09:44:30.201776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.841 [2024-05-16 09:44:30.201783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:47.841 [2024-05-16 09:44:30.201788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.841 [2024-05-16 09:44:30.201796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28920 len:8 PRP1 0x0 PRP2 0x0 00:31:47.842 [2024-05-16 09:44:30.201804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:30.201843] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e77b0 was disconnected and freed. reset controller. 
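Note on the flood of identical completions above: "ABORTED - SQ DELETION (00/08)" is the NVMe generic status SCT 0x0 / SC 0x08, Command Aborted due to SQ Deletion. Once the TCP qpair to the target drops, bdev_nvme tears down the submission queue, manually completes every queued READ/WRITE with that status, and retries them after the controller reset, which is why hundreds of near-identical lines appear back to back. A minimal sketch for tallying them offline, assuming the console text has been saved to a file (build.log is a hypothetical name, not produced by this job):
  # total number of commands completed with "aborted due to SQ deletion"
  grep -c 'ABORTED - SQ DELETION' build.log
  # breakdown of the aborted submissions by opcode (READ vs WRITE)
  grep -oE '(READ|WRITE) sqid:1' build.log | sort | uniq -c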
00:31:47.842 [2024-05-16 09:44:30.201854] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:47.842 [2024-05-16 09:44:30.201879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:30.201889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:30.201898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:30.201906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:30.201915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:30.201923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:30.201930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:30.201938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:30.201946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.842 [2024-05-16 09:44:30.201985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bb0d0 (9): Bad file descriptor 00:31:47.842 [2024-05-16 09:44:30.205583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.842 [2024-05-16 09:44:30.247803] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
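The failover recorded above (10.0.0.2:4421 -> 10.0.0.2:4422 for nqn.2016-06.io.spdk:cnode1, ending in "Resetting controller successful") depends on the target exposing several TCP listeners for the same subsystem and the host registering each of them as an alternate trid of a single bdev_nvme controller. A minimal sketch of that wiring with SPDK's rpc.py, using the addresses and ports taken from this log; the listener removal at the end is only one assumed way to provoke the failover, and the actual autotest script may drive it differently (and may need a multipath/failover mode flag on the second attach, depending on the SPDK version):
  # target side: one subsystem backed by a malloc namespace, two TCP listeners
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # host side (sent to the initiator app's RPC socket): attach both paths under one controller name
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1
  # dropping the active listener forces the qpair disconnect and the failover seen above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421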
00:31:47.842 [2024-05-16 09:44:34.540138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:34.540193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:34.540218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:34.540233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.842 [2024-05-16 09:44:34.540248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bb0d0 is same with the state(5) to be set 00:31:47.842 [2024-05-16 09:44:34.540306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:47.842 [2024-05-16 09:44:34.540316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.842 [2024-05-16 09:44:34.540802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.842 [2024-05-16 09:44:34.540810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 
09:44:34.540936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.540985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.540993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:68 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.843 [2024-05-16 09:44:34.541377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.843 [2024-05-16 09:44:34.541384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.844 [2024-05-16 09:44:34.541394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.844 [2024-05-16 09:44:34.541401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.844 [2024-05-16 09:44:34.541410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.844 [2024-05-16 09:44:34.541418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.844 [2024-05-16 09:44:34.541427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:47.844 [2024-05-16 09:44:34.541434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.844 [2024-05-16 09:44:34.541443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45392 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:47.844 [2024-05-16 09:44:34.541450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.844 [... identical notice pairs repeat for each remaining queued WRITE (sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba 45400 through 45872), every one completed as ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the I/O submission queue was torn down for the controller reset ...] 00:31:47.845 [2024-05-16 09:44:34.542467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs:
*ERROR*: aborting queued i/o 00:31:47.845 [2024-05-16 09:44:34.542474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:47.845 [2024-05-16 09:44:34.542480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:31:47.845 [2024-05-16 09:44:34.542487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.845 [2024-05-16 09:44:34.542524] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17de120 was disconnected and freed. reset controller. 00:31:47.845 [2024-05-16 09:44:34.542534] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:47.845 [2024-05-16 09:44:34.542542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:47.845 [2024-05-16 09:44:34.546121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:47.845 [2024-05-16 09:44:34.546146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bb0d0 (9): Bad file descriptor 00:31:47.845 [2024-05-16 09:44:34.758554] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:47.845 00:31:47.845 Latency(us) 00:31:47.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.845 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:47.845 Verification LBA range: start 0x0 length 0x4000 00:31:47.845 NVMe0n1 : 15.01 11387.50 44.48 666.99 0.00 10590.34 532.48 20753.07 00:31:47.845 =================================================================================================================== 00:31:47.845 Total : 11387.50 44.48 666.99 0.00 10590.34 532.48 20753.07 00:31:47.845 Received shutdown signal, test time was about 15.000000 seconds 00:31:47.845 00:31:47.845 Latency(us) 00:31:47.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.845 =================================================================================================================== 00:31:47.846 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=465229 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 465229 /var/tmp/bdevperf.sock 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 465229 ']' 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:31:47.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:47.846 09:44:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:48.418 09:44:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:48.418 09:44:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:31:48.418 09:44:41 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:48.418 [2024-05-16 09:44:41.897225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:48.418 09:44:41 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:48.678 [2024-05-16 09:44:42.061632] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:48.678 09:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.939 NVMe0n1 00:31:48.939 09:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:49.199 00:31:49.199 09:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:49.459 00:31:49.459 09:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:49.459 09:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:49.459 09:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:49.720 09:44:43 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:53.030 09:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:53.030 09:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:53.030 09:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=466249 00:31:53.030 09:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:53.030 09:44:46 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 466249 00:31:53.974 0 00:31:53.974 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:53.974 [2024-05-16 09:44:40.971258] Starting SPDK v24.05-pre git sha1 
cc94f3031 / DPDK 24.03.0 initialization... 00:31:53.974 [2024-05-16 09:44:40.971315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465229 ] 00:31:53.974 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.974 [2024-05-16 09:44:41.029358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.974 [2024-05-16 09:44:41.093343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.974 [2024-05-16 09:44:43.128617] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:53.974 [2024-05-16 09:44:43.128662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.974 [2024-05-16 09:44:43.128673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.974 [2024-05-16 09:44:43.128682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.974 [2024-05-16 09:44:43.128690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.974 [2024-05-16 09:44:43.128697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.974 [2024-05-16 09:44:43.128704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.974 [2024-05-16 09:44:43.128712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:53.974 [2024-05-16 09:44:43.128719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:53.974 [2024-05-16 09:44:43.128726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:53.974 [2024-05-16 09:44:43.128750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:53.974 [2024-05-16 09:44:43.128763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19770d0 (9): Bad file descriptor 00:31:53.974 [2024-05-16 09:44:43.143471] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:53.974 Running I/O for 1 seconds... 
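
Stripped of harness plumbing, the way the test drives bdevperf in the trace above can be sketched as follows. This is a condensed illustration, not the verbatim failover.sh: the relative paths and the bdevperf_pid/run_test_pid names are shortened stand-ins for the absolute Jenkins workspace paths and PIDs logged above, and waitforlisten is the autotest_common.sh helper that polls until the named UNIX socket is listening.

  # Start bdevperf idle (-z) on its own RPC socket; workload flags are copied from the run above.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

  # ...attach/detach the NVMe-oF paths over the same socket (sketched after the next result table)...

  # Trigger the configured verify job over RPC and wait for it to finish.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  wait "$run_test_pid"
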
00:31:53.974 00:31:53.974 Latency(us) 00:31:53.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.974 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:53.974 Verification LBA range: start 0x0 length 0x4000 00:31:53.974 NVMe0n1 : 1.01 11331.83 44.26 0.00 0.00 11240.19 2348.37 9994.24 00:31:53.974 =================================================================================================================== 00:31:53.974 Total : 11331.83 44.26 0.00 0.00 11240.19 2348.37 9994.24 00:31:53.974 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:53.974 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:54.236 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:54.236 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:54.236 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:54.497 09:44:47 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:54.757 09:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 465229 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 465229 ']' 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 465229 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 465229 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 465229' 00:31:58.060 killing process with pid 465229 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 465229 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 465229 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:58.060 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:58.322 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:58.322 09:44:51 
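
The listener and path manipulation that produces the failovers above reduces to the sketch below. Ports, addresses and the subsystem NQN are the ones used by this run; the for-loop is an illustrative condensation of the three attach calls logged above, and rpc.py without -s talks to the nvmf target's default socket while -s /var/tmp/bdevperf.sock talks to the bdevperf initiator.

  # Target side: expose the subsystem on two additional TCP portals.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Initiator side: attach the same controller through all three portals.
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done

  # Drop paths one at a time; each detach forces bdev_nvme to fail over to the next portal
  # (the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notices above).
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # After each step, confirm the controller is still present.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
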
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:58.322 09:44:51 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:58.322 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.323 rmmod nvme_tcp 00:31:58.323 rmmod nvme_fabrics 00:31:58.323 rmmod nvme_keyring 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 461517 ']' 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 461517 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 461517 ']' 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 461517 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 461517 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 461517' 00:31:58.323 killing process with pid 461517 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 461517 00:31:58.323 [2024-05-16 09:44:51.769059] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:58.323 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 461517 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.585 09:44:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.503 09:44:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:00.503 00:32:00.503 real 0m38.922s 00:32:00.503 user 2m0.687s 
00:32:00.503 sys 0m7.831s 00:32:00.503 09:44:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:00.503 09:44:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.503 ************************************ 00:32:00.503 END TEST nvmf_failover 00:32:00.503 ************************************ 00:32:00.503 09:44:54 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:00.503 09:44:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:00.503 09:44:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:00.503 09:44:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.503 ************************************ 00:32:00.503 START TEST nvmf_host_discovery 00:32:00.503 ************************************ 00:32:00.503 09:44:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:00.764 * Looking for test storage... 00:32:00.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:00.764 09:44:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:07.560 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:07.560 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.560 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:07.561 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:07.561 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.561 09:45:00 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:07.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:32:07.561 00:32:07.561 --- 10.0.0.2 ping statistics --- 00:32:07.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.561 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:32:07.561 00:32:07.561 --- 10.0.0.1 ping statistics --- 00:32:07.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.561 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:07.561 09:45:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=471328 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 471328 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 471328 ']' 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:07.561 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:07.561 [2024-05-16 09:45:01.092357] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:32:07.561 [2024-05-16 09:45:01.092423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.838 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.838 [2024-05-16 09:45:01.181289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.838 [2024-05-16 09:45:01.275914] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:07.838 [2024-05-16 09:45:01.275966] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.838 [2024-05-16 09:45:01.275975] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.838 [2024-05-16 09:45:01.275982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.838 [2024-05-16 09:45:01.275988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.838 [2024-05-16 09:45:01.276012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.432 [2024-05-16 09:45:01.934482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.432 [2024-05-16 09:45:01.946458] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:08.432 [2024-05-16 09:45:01.946735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.432 null0 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.432 null1 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.432 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=471719 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 471719 /tmp/host.sock 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 471719 ']' 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:08.708 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:08.708 09:45:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:08.708 [2024-05-16 09:45:02.040665] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:32:08.708 [2024-05-16 09:45:02.040726] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471719 ] 00:32:08.708 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.708 [2024-05-16 09:45:02.104674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.708 [2024-05-16 09:45:02.178591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.290 09:45:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.290 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:09.555 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 09:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:09.556 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.826 [2024-05-16 09:45:03.149770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:09.826 
09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:32:09.826 09:45:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:32:10.443 [2024-05-16 09:45:03.872212] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:10.443 [2024-05-16 09:45:03.872234] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:10.443 [2024-05-16 09:45:03.872250] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:10.443 [2024-05-16 09:45:03.960544] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:10.720 [2024-05-16 09:45:04.064955] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:32:10.720 [2024-05-16 09:45:04.064976] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.000 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.001 09:45:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.001 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.302 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.303 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.303 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.303 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.303 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.576 [2024-05-16 09:45:04.914515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:11.576 [2024-05-16 09:45:04.915441] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:11.576 [2024-05-16 09:45:04.915466] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.576 09:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:11.576 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.576 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:11.576 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:11.576 09:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:11.576 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.577 [2024-05-16 09:45:05.044281] 
bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:11.577 09:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:32:11.577 [2024-05-16 09:45:05.106866] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:11.577 [2024-05-16 09:45:05.106882] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:11.577 [2024-05-16 09:45:05.106892] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.588 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.866 [2024-05-16 09:45:06.177999] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:12.866 [2024-05-16 09:45:06.178023] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:12.866 [2024-05-16 09:45:06.180415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.866 [2024-05-16 09:45:06.180434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.866 [2024-05-16 09:45:06.180447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.866 [2024-05-16 09:45:06.180455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.866 [2024-05-16 09:45:06.180463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.866 [2024-05-16 09:45:06.180470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.866 [2024-05-16 09:45:06.180478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:12.866 [2024-05-16 09:45:06.180485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:12.866 [2024-05-16 09:45:06.180492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.866 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.867 09:45:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.867 [2024-05-16 09:45:06.190429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.200468] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 [2024-05-16 09:45:06.200832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.201082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.201094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.201103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.201116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.201134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 09:45:06.201141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.201149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.201162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
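The repeated "local max=10 / (( max-- )) / eval ... / sleep 1" trace lines throughout this section come from the waitforcondition helper in autotest_common.sh. A simplified reconstruction from the trace follows; the in-tree version may handle quoting and failure reporting differently.

# Sketch of the polling loop behind the waitforcondition trace lines.
waitforcondition() {
    local cond=$1        # a bash expression, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0     # condition met
        fi
        sleep 1
    done
    echo "condition not met: $cond" >&2
    return 1
}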
00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.867 [2024-05-16 09:45:06.210524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 [2024-05-16 09:45:06.210871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.211309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.211348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.211359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.211377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.211414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 09:45:06.211423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.211432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.211447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.867 [2024-05-16 09:45:06.220578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 [2024-05-16 09:45:06.220894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.221317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.221356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.221368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.221388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.221400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 09:45:06.221407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.221415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.221429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
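Similarly, the notify_get_notifications / jq '. | length' pairs seen at several points above come from the notification helpers in host/discovery.sh. A simplified reconstruction (notification_count and notify_id are globals, as in the trace; error handling omitted):

# Sketch of the notification-count helpers exercised in this section.
get_notification_count() {
    # count notifications on the host app newer than the last seen notify_id
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}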
00:32:12.867 [2024-05-16 09:45:06.230634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 [2024-05-16 09:45:06.231004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.231400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.231438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.231449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.231468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.231496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 09:45:06.231504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.231511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.231537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:12.867 [2024-05-16 09:45:06.240693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:12.867 [2024-05-16 09:45:06.241719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.242068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.242082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.242091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.242106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.242126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 
09:45:06.242134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.242141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.242154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:12.867 [2024-05-16 09:45:06.250747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 [2024-05-16 09:45:06.251102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.251422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.251433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.251440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.251452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.251470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 09:45:06.251477] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.251484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.251495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
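Finally, the recurring rpc_cmd | jq | sort | xargs pipelines are the query helpers from host/discovery.sh (traced as @55, @59 and @63 above). Reconstructed from the traced commands, minus error handling:

# Sketch of the query helpers whose pipelines recur throughout this trace.
get_subsystem_names() {   # controller names on the host, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {         # attached namespaces, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {   # listening ports behind controller $1, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}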
00:32:12.867 [2024-05-16 09:45:06.260804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.867 [2024-05-16 09:45:06.261153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.261466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.867 [2024-05-16 09:45:06.261477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22ff6d0 with addr=10.0.0.2, port=4420 00:32:12.867 [2024-05-16 09:45:06.261484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ff6d0 is same with the state(5) to be set 00:32:12.867 [2024-05-16 09:45:06.261495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22ff6d0 (9): Bad file descriptor 00:32:12.867 [2024-05-16 09:45:06.261511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.867 [2024-05-16 09:45:06.261518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:12.867 [2024-05-16 09:45:06.261525] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.867 [2024-05-16 09:45:06.261536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:12.867 [2024-05-16 09:45:06.265533] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:12.867 [2024-05-16 09:45:06.265551] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:12.867 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.148 09:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.193 [2024-05-16 09:45:07.624241] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:14.193 [2024-05-16 09:45:07.624259] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:14.193 [2024-05-16 09:45:07.624273] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:14.193 [2024-05-16 09:45:07.712546] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:14.767 [2024-05-16 09:45:08.022110] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:14.767 [2024-05-16 09:45:08.022141] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.767 request: 00:32:14.767 { 00:32:14.767 "name": "nvme", 00:32:14.767 "trtype": "tcp", 00:32:14.767 "traddr": "10.0.0.2", 00:32:14.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:14.767 "adrfam": "ipv4", 00:32:14.767 "trsvcid": "8009", 00:32:14.767 "wait_for_attach": true, 00:32:14.767 "method": "bdev_nvme_start_discovery", 00:32:14.767 "req_id": 1 00:32:14.767 } 00:32:14.767 Got JSON-RPC error response 00:32:14.767 response: 00:32:14.767 { 00:32:14.767 "code": -17, 00:32:14.767 "message": "File exists" 00:32:14.767 } 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.767 request: 00:32:14.767 { 00:32:14.767 "name": "nvme_second", 00:32:14.767 "trtype": "tcp", 00:32:14.767 "traddr": "10.0.0.2", 00:32:14.767 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:14.767 "adrfam": "ipv4", 00:32:14.767 "trsvcid": "8009", 00:32:14.767 "wait_for_attach": true, 00:32:14.767 "method": "bdev_nvme_start_discovery", 00:32:14.767 "req_id": 1 00:32:14.767 } 00:32:14.767 Got JSON-RPC error response 00:32:14.767 response: 00:32:14.767 { 00:32:14.767 "code": -17, 00:32:14.767 "message": "File exists" 00:32:14.767 } 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.767 
09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.767 09:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:16.151 [2024-05-16 09:45:09.273571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.151 [2024-05-16 09:45:09.273792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.151 [2024-05-16 09:45:09.273804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fb740 with addr=10.0.0.2, port=8010 00:32:16.151 [2024-05-16 09:45:09.273823] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:16.151 [2024-05-16 09:45:09.273831] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:16.151 [2024-05-16 09:45:09.273839] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:16.722 [2024-05-16 09:45:10.275925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.722 [2024-05-16 09:45:10.276281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:16.722 [2024-05-16 09:45:10.276295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fb740 with addr=10.0.0.2, port=8010 00:32:16.722 [2024-05-16 09:45:10.276309] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:16.722 [2024-05-16 09:45:10.276318] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:16.722 [2024-05-16 09:45:10.276325] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:18.109 [2024-05-16 09:45:11.277925] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:18.109 request: 00:32:18.109 { 00:32:18.109 "name": "nvme_second", 00:32:18.109 "trtype": "tcp", 00:32:18.109 "traddr": "10.0.0.2", 00:32:18.109 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:18.109 
"adrfam": "ipv4", 00:32:18.109 "trsvcid": "8010", 00:32:18.109 "attach_timeout_ms": 3000, 00:32:18.109 "method": "bdev_nvme_start_discovery", 00:32:18.109 "req_id": 1 00:32:18.109 } 00:32:18.109 Got JSON-RPC error response 00:32:18.109 response: 00:32:18.109 { 00:32:18.109 "code": -110, 00:32:18.109 "message": "Connection timed out" 00:32:18.109 } 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 471719 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.109 rmmod nvme_tcp 00:32:18.109 rmmod nvme_fabrics 00:32:18.109 rmmod nvme_keyring 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 471328 ']' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 471328 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 471328 ']' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 471328 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 471328 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 471328' 00:32:18.109 killing process with pid 471328 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 471328 00:32:18.109 [2024-05-16 09:45:11.458269] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 471328 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.109 09:45:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:20.655 00:32:20.655 real 0m19.585s 00:32:20.655 user 0m23.384s 00:32:20.655 sys 0m6.497s 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:20.655 ************************************ 00:32:20.655 END TEST nvmf_host_discovery 00:32:20.655 ************************************ 00:32:20.655 09:45:13 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:20.655 09:45:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:20.655 09:45:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:20.655 09:45:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:20.655 ************************************ 00:32:20.655 START TEST nvmf_host_multipath_status 00:32:20.655 ************************************ 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:20.655 * Looking for test storage... 
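Before the multipath trace continues, the discovery flow exercised above can be condensed into a short, illustrative recap. This is a sketch only: rpc_cmd is the autotest wrapper around scripts/rpc.py, /tmp/host.sock is the host application's RPC socket as it appears in the trace, and every flag is copied from the commands traced above.

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w      # attach and wait for completion
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'   # -> nvme
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                 # -> nvme0n1 nvme0n2
    # Re-issuing the call with an existing -b name returns JSON-RPC -17 "File exists";
    # -b nvme_second against port 8010 (where nothing listens) with -T 3000 fails with
    # -110 "Connection timed out", exactly as in the request/response blocks above.
    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme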
00:32:20.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.655 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:20.656 09:45:13 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:20.656 09:45:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:27.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:27.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
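As a side note on the device scan above: both E810 ports are matched by PCI ID 8086:159b and then resolved to their kernel net devices through sysfs. A minimal equivalent of what nvmf/common.sh is doing here, with the device addresses and interface names taken from this runner's output, is:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"   # cvl_0_0, cvl_0_1
    done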
00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:27.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:27.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:27.240 09:45:20 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:27.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:32:27.240 00:32:27.240 --- 10.0.0.2 ping statistics --- 00:32:27.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.240 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:27.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:32:27.240 00:32:27.240 --- 10.0.0.1 ping statistics --- 00:32:27.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.240 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:27.240 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=478192 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 478192 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 478192 ']' 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:27.500 09:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:27.500 [2024-05-16 09:45:20.889474] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
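The nvmftestinit plumbing traced above is worth restating in condensed form: the target-side port is moved into its own network namespace so initiator and target run separate IP stacks on one machine, and a one-packet ping in each direction confirms the 10.0.0.0/24 link before the target starts. A sketch only, using the interface names detected on this runner and with long binary paths shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # link sanity check
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &         # pid recorded as nvmfpid=478192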
00:32:27.500 [2024-05-16 09:45:20.889536] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.500 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.500 [2024-05-16 09:45:20.960719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:27.500 [2024-05-16 09:45:21.035366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.500 [2024-05-16 09:45:21.035403] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.500 [2024-05-16 09:45:21.035411] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.500 [2024-05-16 09:45:21.035418] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.500 [2024-05-16 09:45:21.035424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.500 [2024-05-16 09:45:21.035557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.500 [2024-05-16 09:45:21.035559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=478192 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:28.437 [2024-05-16 09:45:21.836092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.437 09:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:28.696 Malloc0 00:32:28.696 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:28.696 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:28.955 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:28.955 [2024-05-16 09:45:22.468632] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:32:28.955 [2024-05-16 09:45:22.468871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:28.955 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:29.216 [2024-05-16 09:45:22.625188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=478574 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 478574 /var/tmp/bdevperf.sock 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 478574 ']' 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:29.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
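The target-side configuration and the bdevperf launch traced above reduce to the following sketch (rpc.py stands in for the full scripts/rpc.py path shown in the trace; every flag is copied from it). One malloc bdev is exported through subsystem cnode1 on two listeners, 4420 and 4421, which is what gives the host two paths to the same namespace; bdevperf is then started on its own RPC socket, /var/tmp/bdevperf.sock, so the multipath checks that follow can query it.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &   # pid recorded as bdevperf_pid=478574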
00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:29.216 09:45:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:30.156 09:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:30.156 09:45:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:30.156 09:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:30.156 09:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:30.725 Nvme0n1 00:32:30.725 09:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:30.985 Nvme0n1 00:32:30.985 09:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:30.985 09:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:33.526 09:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:33.526 09:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:33.526 09:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:33.526 09:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:34.465 09:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:34.465 09:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:34.465 09:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.465 09:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.726 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:34.986 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:34.986 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:34.986 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:34.986 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:35.247 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:35.508 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:35.508 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:35.508 09:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:35.508 09:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:35.768 09:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:36.709 09:45:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:36.709 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:36.709 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.709 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:36.969 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:36.969 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:36.969 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:36.969 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.230 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:37.490 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.490 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:37.490 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.490 09:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:37.752 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:38.016 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:38.275 09:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.217 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:39.478 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:39.478 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:39.478 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.478 09:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:39.739 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.000 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.000 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:40.000 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:40.000 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:40.260 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:40.260 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:32:40.260 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:40.260 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:40.520 09:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:32:41.458 09:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:32:41.458 09:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:41.458 09:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.458 09:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:41.718 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.718 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:41.718 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.718 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:41.977 09:45:35 
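set_ANA_state itself is just two RPCs against the NVMe-oF target (note: no -s bdevperf socket here), one per listener, so the two sides of the pair can be put into different ANA states independently. A minimal sketch under the same $SPDK_DIR assumption; the function body is reconstructed from the trace, not copied from the script:

    set_ANA_state() {
        # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
        "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized inaccessible   # the transition exercised just above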
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:41.977 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:42.236 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.236 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:42.236 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.236 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:32:42.496 09:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:42.757 09:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:42.757 09:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:32:44.139 09:45:37 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.139 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:44.399 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.399 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:44.399 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.399 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:44.660 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:44.660 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:44.660 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.660 09:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:44.660 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.660 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:44.660 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:44.660 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:44.921 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:44.921 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:32:44.921 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:32:44.921 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:45.181 09:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:32:46.122 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:32:46.122 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:46.122 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.122 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:46.383 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.383 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:46.383 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.383 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:46.642 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.642 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:46.642 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.642 09:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:46.642 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.642 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:46.642 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.642 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- 
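check_status strings six of those probes together; judging by the call order in the trace, its six arguments are the expected current, connected and accessible values for the 4420 and 4421 paths. A hedged reconstruction of that driver, inferred from the xtrace output rather than copied from multipath_status.sh:

    check_status() {
        # expected values, in order: 4420-current 4421-current
        # 4420-connected 4421-connected 4420-accessible 4421-accessible
        port_status 4420 current    "$1" &&
        port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" &&
        port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }
    # e.g. after set_ANA_state inaccessible optimized, only the 4421 path
    # should be current and accessible while both stay connected:
    check_status false true true true false true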
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:46.901 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:47.162 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:47.162 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:32:47.423 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:32:47.423 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:47.423 09:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:47.683 09:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:48.625 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:48.625 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:48.625 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.625 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:48.884 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:48.884 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:48.884 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:48.884 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:32:49.144 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.144 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:49.145 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.145 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:49.145 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.145 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:49.145 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.145 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:49.405 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.405 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:49.405 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.405 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:49.667 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.667 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:49.667 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:49.667 09:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:49.667 09:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:49.667 09:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:49.667 09:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:49.927 09:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:50.187 09:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:51.128 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:32:51.129 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:51.129 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.129 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:51.129 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:51.129 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:51.389 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.389 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:51.389 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.389 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:51.389 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.389 09:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.650 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:51.911 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:51.911 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:51.911 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:51.911 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- 
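The bdev_nvme_set_multipath_policy call at @116 is the pivot of the second half of the test: up to that point the default active_passive behaviour kept exactly one path current, whereas afterwards, with both listeners optimized, the very next check expects current == true on both 4420 and 4421, since active_active spreads I/O across every optimized path. A minimal sketch of that switch, reusing the assumed $rpc shorthand from the earlier sketches:

    # switch the multipath policy for the Nvme0n1 bdev; with active_active,
    # I/O is distributed across all optimized paths instead of a single one
    $rpc bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active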
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:52.172 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:52.172 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:52.172 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:52.172 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:52.432 09:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:53.373 09:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:53.373 09:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:53.373 09:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.373 09:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:53.633 09:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.633 09:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:53.633 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.633 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:53.633 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.633 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:53.633 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.633 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:53.894 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:53.894 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:53.894 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:53.894 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:54.155 09:45:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.155 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:54.414 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:54.415 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:54.415 09:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:54.674 09:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:54.674 09:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:56.058 09:45:49 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.058 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:56.319 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.319 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:56.319 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.319 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:56.580 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.580 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:56.580 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.580 09:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:56.580 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.580 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:56.580 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.580 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 478574 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 478574 ']' 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 478574 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478574 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
478574' 00:32:56.841 killing process with pid 478574 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 478574 00:32:56.841 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 478574 00:32:56.841 Connection closed with partial response: 00:32:56.841 00:32:56.841 00:32:57.108 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 478574 00:32:57.108 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:57.108 [2024-05-16 09:45:22.687094] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:32:57.108 [2024-05-16 09:45:22.687152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478574 ] 00:32:57.108 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.108 [2024-05-16 09:45:22.736803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.108 [2024-05-16 09:45:22.788731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.108 Running I/O for 90 seconds... 00:32:57.108 [2024-05-16 09:45:36.118837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.118903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.118921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.118937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.118953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.118968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.118984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.118990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:57.108 [2024-05-16 09:45:36.119524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.108 [2024-05-16 09:45:36.119530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:32:57.108 [2024-05-16 09:45:36.119541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:57.109 [2024-05-16 09:45:36.119688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:57.109 [2024-05-16 09:45:36.119693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
[The run of paired nvme_qpair.c notices from the preceding lines continues here: 243:nvme_io_qpair_print_command WRITE/READ entries (sqid:1, nsid:1, len:8, LBAs 44816-45824 and 74280-75240), each followed by a 474:spdk_nvme_print_completion notice of ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 (p:0 m:0 dnr:0), timestamped 2024-05-16 09:45:36.119 through 09:45:48.183. Only the tail of the job output is reproduced below.]
00:32:57.113 Received shutdown signal, test time was about 25.652287 seconds
00:32:57.113
00:32:57.113                                                                   Latency(us)
00:32:57.113 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:57.113 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:57.113 Verification LBA range: start 0x0 length 0x4000
00:32:57.113 Nvme0n1                     :      25.65   10742.18      41.96       0.00     0.00   11896.14     146.77 3019898.88
00:32:57.113 ===================================================================================================================
00:32:57.113 Total                       :   10742.18      41.96       0.00     0.00   11896.14     146.77 3019898.88
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status --
nvmf/common.sh@121 -- # for i in {1..20} 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:57.113 rmmod nvme_tcp 00:32:57.113 rmmod nvme_fabrics 00:32:57.113 rmmod nvme_keyring 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 478192 ']' 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 478192 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 478192 ']' 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 478192 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:57.113 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 478192 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 478192' 00:32:57.374 killing process with pid 478192 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 478192 00:32:57.374 [2024-05-16 09:45:50.709398] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 478192 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:57.374 09:45:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.923 09:45:52 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:59.923 00:32:59.923 real 0m39.209s 00:32:59.923 user 1m41.811s 00:32:59.923 sys 0m10.510s 00:32:59.923 09:45:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:59.923 09:45:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.923 
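The teardown traced just above reduces to a few commands that can be replayed by hand when cleaning up after a failed run. A minimal sketch in the same shell the suite uses, with paths abbreviated relative to the spdk checkout and a placeholder NVMF_TGT_PID variable (neither is taken verbatim from this log; rpc.py talks to the default /var/tmp/spdk.sock socket):

    # delete the subsystem the test exercised (same RPC as in the trace above)
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel initiator modules the host side pulled in
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt application (placeholder PID variable)
    kill "$NVMF_TGT_PID"

This is what the nvmftestfini/nvmfcleanup/killprocess helpers shown in the trace do internally, minus their retry and namespace-cleanup logic.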
************************************ 00:32:59.923 END TEST nvmf_host_multipath_status 00:32:59.923 ************************************ 00:32:59.923 09:45:52 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:59.923 09:45:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:59.923 09:45:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:59.923 09:45:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.923 ************************************ 00:32:59.923 START TEST nvmf_discovery_remove_ifc 00:32:59.923 ************************************ 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:59.923 * Looking for test storage... 00:32:59.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:59.923 
09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:59.923 09:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@297 -- # local -ga x722 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:06.510 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:06.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:06.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:06.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.510 09:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:06.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:33:06.771 00:33:06.771 --- 10.0.0.2 ping statistics --- 00:33:06.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.771 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:33:06.771 00:33:06.771 --- 10.0.0.1 ping statistics --- 00:33:06.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.771 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=488286 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 488286 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:06.771 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 488286 ']' 00:33:06.772 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.772 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:06.772 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.772 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:06.772 09:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:07.032 [2024-05-16 09:46:00.331510] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:33:07.032 [2024-05-16 09:46:00.331575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.032 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.032 [2024-05-16 09:46:00.419983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.032 [2024-05-16 09:46:00.512177] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.032 [2024-05-16 09:46:00.512232] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.032 [2024-05-16 09:46:00.512240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.032 [2024-05-16 09:46:00.512247] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.032 [2024-05-16 09:46:00.512253] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.032 [2024-05-16 09:46:00.512288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.604 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:07.864 [2024-05-16 09:46:01.169930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:07.865 [2024-05-16 09:46:01.177899] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:07.865 [2024-05-16 09:46:01.178190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:07.865 null0 00:33:07.865 [2024-05-16 09:46:01.210131] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=488522 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 488522 /tmp/host.sock 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 488522 ']' 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:07.865 09:46:01 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:07.865 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:07.865 09:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:07.865 [2024-05-16 09:46:01.285093] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:33:07.865 [2024-05-16 09:46:01.285155] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488522 ] 00:33:07.865 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.865 [2024-05-16 09:46:01.349037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.865 [2024-05-16 09:46:01.423256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.805 09:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:09.745 [2024-05-16 09:46:03.198191] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:09.745 [2024-05-16 09:46:03.198212] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:09.745 [2024-05-16 
09:46:03.198226] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:09.745 [2024-05-16 09:46:03.285510] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:10.006 [2024-05-16 09:46:03.512458] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:10.006 [2024-05-16 09:46:03.512513] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:10.006 [2024-05-16 09:46:03.512535] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:10.006 [2024-05-16 09:46:03.512549] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:10.006 [2024-05-16 09:46:03.512569] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:10.006 [2024-05-16 09:46:03.517876] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe23e40 was disconnected and freed. delete nvme_qpair. 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:10.006 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:10.266 09:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:11.206 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:11.467 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.467 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:11.467 09:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:12.410 09:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:13.355 09:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:14.740 09:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:15.682 [2024-05-16 09:46:08.952943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:15.682 [2024-05-16 09:46:08.952982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.682 [2024-05-16 09:46:08.952995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.682 [2024-05-16 09:46:08.953004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.682 [2024-05-16 09:46:08.953012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.682 [2024-05-16 09:46:08.953020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.682 [2024-05-16 09:46:08.953027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.682 [2024-05-16 09:46:08.953035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.682 [2024-05-16 09:46:08.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.682 [2024-05-16 09:46:08.953051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.682 [2024-05-16 09:46:08.953061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.682 [2024-05-16 09:46:08.953069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb1b0 is same with the state(5) to be set 00:33:15.682 [2024-05-16 09:46:08.962964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeb1b0 (9): Bad file descriptor 00:33:15.682 
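The spdk_sock_recv timeouts, aborted admin commands, and reset attempts above are the intended effect of the step recorded earlier at discovery_remove_ifc.sh@75-76: after the initial attach, the harness pulls the target-side address and link out from under the live connection, and only restores them later (at @82-83 below) so the discovery service can reattach. Isolated from the harness, that fault-injection step is just:

    # Take the target port away while the host still holds a connection...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ...and later give it back so discovery can re-create the controller (nvme1 in this run).
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up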
[2024-05-16 09:46:08.973005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:15.682 09:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:16.625 [2024-05-16 09:46:10.039093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:17.566 [2024-05-16 09:46:11.063090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:17.566 [2024-05-16 09:46:11.063133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeb1b0 with addr=10.0.0.2, port=4420 00:33:17.566 [2024-05-16 09:46:11.063146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeb1b0 is same with the state(5) to be set 00:33:17.566 [2024-05-16 09:46:11.063506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeb1b0 (9): Bad file descriptor 00:33:17.566 [2024-05-16 09:46:11.063531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:17.566 [2024-05-16 09:46:11.063553] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:17.566 [2024-05-16 09:46:11.063576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.566 [2024-05-16 09:46:11.063587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.566 [2024-05-16 09:46:11.063597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.566 [2024-05-16 09:46:11.063605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.566 [2024-05-16 09:46:11.063614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.566 [2024-05-16 09:46:11.063621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.566 [2024-05-16 09:46:11.063630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.566 [2024-05-16 09:46:11.063637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.566 [2024-05-16 09:46:11.063645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:17.566 [2024-05-16 09:46:11.063653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:17.566 [2024-05-16 09:46:11.063660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:33:17.566 [2024-05-16 09:46:11.064169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea640 (9): Bad file descriptor 00:33:17.566 [2024-05-16 09:46:11.065179] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:17.566 [2024-05-16 09:46:11.065191] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:17.566 09:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.566 09:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:17.566 09:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:18.949 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.949 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.950 09:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:18.950 09:46:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.889 [2024-05-16 09:46:13.124941] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:19.889 [2024-05-16 09:46:13.124961] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:19.889 [2024-05-16 09:46:13.124975] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:19.889 [2024-05-16 09:46:13.254450] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.889 [2024-05-16 09:46:13.355269] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:19.889 [2024-05-16 09:46:13.355305] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:19.889 [2024-05-16 09:46:13.355324] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:19.889 [2024-05-16 09:46:13.355338] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:19.889 [2024-05-16 09:46:13.355346] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:19.889 09:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:19.889 [2024-05-16 09:46:13.362224] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xdf8010 was disconnected and freed. delete nvme_qpair. 
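The repeated bdev_get_bdevs / jq / sort / xargs / sleep 1 entries throughout this test are the harness polling the host process until the expected bdev list appears (nvme0n1, then the empty list after removal, then nvme1n1 after reattach). A minimal stand-alone sketch of that loop, assuming SPDK's scripts/rpc.py (which the rpc_cmd wrapper in the trace invokes) is available as rpc.py on PATH:

    get_bdev_list() {
        # Same pipeline as host/discovery_remove_ifc.sh@29 in the trace.
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once a second until the bdev list matches the expectation
        # (pass an empty string to wait for the bdev to disappear).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1    # e.g. block until the re-attached controller's namespace shows up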
00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:20.828 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 488522 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 488522 ']' 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 488522 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 488522 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 488522' 00:33:21.088 killing process with pid 488522 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 488522 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 488522 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:21.088 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:21.088 rmmod nvme_tcp 00:33:21.088 rmmod nvme_fabrics 00:33:21.088 rmmod nvme_keyring 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:33:21.348 
09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 488286 ']' 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 488286 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 488286 ']' 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 488286 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 488286 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 488286' 00:33:21.348 killing process with pid 488286 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 488286 00:33:21.348 [2024-05-16 09:46:14.724954] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 488286 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:21.348 09:46:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.893 09:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:23.893 00:33:23.893 real 0m23.896s 00:33:23.893 user 0m28.402s 00:33:23.893 sys 0m6.455s 00:33:23.893 09:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:23.893 09:46:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:23.893 ************************************ 00:33:23.893 END TEST nvmf_discovery_remove_ifc 00:33:23.893 ************************************ 00:33:23.893 09:46:16 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:23.893 09:46:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:23.893 09:46:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:23.893 09:46:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:23.893 
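Condensing the control-plane side of the test that just finished, as recorded in the trace: a target nvmf_tgt runs inside the namespace (its subsystem and listeners on ports 8009/4420 are configured by an rpc_cmd whose payload is not expanded in this excerpt), a second nvmf_tgt acts as the NVMe host over /tmp/host.sock, and discovery is started with short loss/reconnect timers so the interface removal is noticed quickly. Paths are shortened and rpc.py stands in for the harness' rpc_cmd wrapper:

    # Target: SPDK NVMe-oF target inside the namespace.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # Host: a second nvmf_tgt instance used as the NVMe host, controlled over /tmp/host.sock.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    # Host-side RPCs exactly as issued in the trace.
    rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc.py -s /tmp/host.sock framework_start_init
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach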
************************************ 00:33:23.893 START TEST nvmf_identify_kernel_target 00:33:23.893 ************************************ 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:23.893 * Looking for test storage... 00:33:23.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.893 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:23.894 09:46:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:23.894 09:46:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:30.485 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.485 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:30.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:30.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:30.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.486 09:46:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.486 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:30.486 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:30.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:33:30.747 00:33:30.747 --- 10.0.0.2 ping statistics --- 00:33:30.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.747 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
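The trace above is the namespace plumbing that gives the test a real link between initiator and target: one port of the E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while its sibling (cvl_0_1) stays in the default namespace as 10.0.0.1, and an iptables rule opens TCP 4420 before both directions are ping-tested. A minimal standalone sketch of that same sequence, using the interface names and addresses from the trace rather than the test's helper functions:

    # Move the target-side port into its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the link on the same /24.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator side and verify reachability.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
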
00:33:30.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:33:30.747 00:33:30.747 --- 10.0.0.1 ping statistics --- 00:33:30.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.747 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:30.747 09:46:24 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:30.747 09:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:34.048 Waiting for block devices as requested 00:33:34.048 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:34.048 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:34.308 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:34.308 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:34.308 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:34.569 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:34.569 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:34.569 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:34.830 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:34.830 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:34.830 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:35.090 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:35.090 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:35.090 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:35.351 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:35.351 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:35.351 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:35.612 No valid GPT data, bailing 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:35.612 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:33:35.875 00:33:35.875 Discovery Log Number of Records 2, Generation counter 2 00:33:35.875 =====Discovery Log Entry 0====== 00:33:35.875 trtype: tcp 00:33:35.875 adrfam: ipv4 00:33:35.875 subtype: current discovery subsystem 00:33:35.875 treq: not specified, sq flow control disable supported 00:33:35.875 portid: 1 00:33:35.875 trsvcid: 4420 00:33:35.875 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:35.875 traddr: 10.0.0.1 00:33:35.875 eflags: none 00:33:35.875 sectype: none 00:33:35.875 =====Discovery Log Entry 1====== 00:33:35.875 trtype: tcp 00:33:35.875 adrfam: ipv4 00:33:35.875 subtype: nvme subsystem 00:33:35.875 treq: not specified, sq flow control disable supported 00:33:35.875 portid: 1 00:33:35.875 trsvcid: 4420 00:33:35.875 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:35.875 traddr: 10.0.0.1 00:33:35.875 eflags: none 00:33:35.875 sectype: none 00:33:35.875 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:35.875 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:35.875 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.875 ===================================================== 00:33:35.875 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:35.875 ===================================================== 00:33:35.875 Controller Capabilities/Features 00:33:35.875 ================================ 00:33:35.875 Vendor ID: 0000 00:33:35.875 Subsystem Vendor ID: 0000 00:33:35.875 Serial Number: 39f0f960b1ef6dc94004 00:33:35.875 Model Number: Linux 00:33:35.875 Firmware Version: 6.7.0-68 00:33:35.875 Recommended Arb Burst: 0 00:33:35.875 IEEE OUI Identifier: 00 00 00 00:33:35.875 Multi-path I/O 00:33:35.875 May have multiple subsystem ports: No 00:33:35.875 May have multiple 
controllers: No 00:33:35.875 Associated with SR-IOV VF: No 00:33:35.875 Max Data Transfer Size: Unlimited 00:33:35.875 Max Number of Namespaces: 0 00:33:35.875 Max Number of I/O Queues: 1024 00:33:35.875 NVMe Specification Version (VS): 1.3 00:33:35.875 NVMe Specification Version (Identify): 1.3 00:33:35.875 Maximum Queue Entries: 1024 00:33:35.875 Contiguous Queues Required: No 00:33:35.875 Arbitration Mechanisms Supported 00:33:35.875 Weighted Round Robin: Not Supported 00:33:35.875 Vendor Specific: Not Supported 00:33:35.875 Reset Timeout: 7500 ms 00:33:35.875 Doorbell Stride: 4 bytes 00:33:35.875 NVM Subsystem Reset: Not Supported 00:33:35.875 Command Sets Supported 00:33:35.875 NVM Command Set: Supported 00:33:35.875 Boot Partition: Not Supported 00:33:35.875 Memory Page Size Minimum: 4096 bytes 00:33:35.875 Memory Page Size Maximum: 4096 bytes 00:33:35.875 Persistent Memory Region: Not Supported 00:33:35.875 Optional Asynchronous Events Supported 00:33:35.875 Namespace Attribute Notices: Not Supported 00:33:35.875 Firmware Activation Notices: Not Supported 00:33:35.875 ANA Change Notices: Not Supported 00:33:35.875 PLE Aggregate Log Change Notices: Not Supported 00:33:35.875 LBA Status Info Alert Notices: Not Supported 00:33:35.875 EGE Aggregate Log Change Notices: Not Supported 00:33:35.875 Normal NVM Subsystem Shutdown event: Not Supported 00:33:35.875 Zone Descriptor Change Notices: Not Supported 00:33:35.875 Discovery Log Change Notices: Supported 00:33:35.875 Controller Attributes 00:33:35.875 128-bit Host Identifier: Not Supported 00:33:35.875 Non-Operational Permissive Mode: Not Supported 00:33:35.875 NVM Sets: Not Supported 00:33:35.875 Read Recovery Levels: Not Supported 00:33:35.875 Endurance Groups: Not Supported 00:33:35.875 Predictable Latency Mode: Not Supported 00:33:35.875 Traffic Based Keep ALive: Not Supported 00:33:35.875 Namespace Granularity: Not Supported 00:33:35.875 SQ Associations: Not Supported 00:33:35.875 UUID List: Not Supported 00:33:35.875 Multi-Domain Subsystem: Not Supported 00:33:35.875 Fixed Capacity Management: Not Supported 00:33:35.875 Variable Capacity Management: Not Supported 00:33:35.875 Delete Endurance Group: Not Supported 00:33:35.875 Delete NVM Set: Not Supported 00:33:35.875 Extended LBA Formats Supported: Not Supported 00:33:35.875 Flexible Data Placement Supported: Not Supported 00:33:35.875 00:33:35.875 Controller Memory Buffer Support 00:33:35.875 ================================ 00:33:35.875 Supported: No 00:33:35.875 00:33:35.875 Persistent Memory Region Support 00:33:35.875 ================================ 00:33:35.875 Supported: No 00:33:35.875 00:33:35.875 Admin Command Set Attributes 00:33:35.875 ============================ 00:33:35.875 Security Send/Receive: Not Supported 00:33:35.875 Format NVM: Not Supported 00:33:35.875 Firmware Activate/Download: Not Supported 00:33:35.875 Namespace Management: Not Supported 00:33:35.875 Device Self-Test: Not Supported 00:33:35.875 Directives: Not Supported 00:33:35.875 NVMe-MI: Not Supported 00:33:35.875 Virtualization Management: Not Supported 00:33:35.875 Doorbell Buffer Config: Not Supported 00:33:35.875 Get LBA Status Capability: Not Supported 00:33:35.875 Command & Feature Lockdown Capability: Not Supported 00:33:35.875 Abort Command Limit: 1 00:33:35.875 Async Event Request Limit: 1 00:33:35.875 Number of Firmware Slots: N/A 00:33:35.875 Firmware Slot 1 Read-Only: N/A 00:33:35.875 Firmware Activation Without Reset: N/A 00:33:35.875 Multiple Update Detection Support: N/A 
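The identify output being printed here comes from the kernel target that configure_kernel_target assembled through configfs a few lines earlier: mkdir of the subsystem, namespace and port directories, a handful of echoes, then the ln -s that starts the listener. A condensed sketch of that sequence with the same NQN, block device and port as the trace; the attribute file names follow the standard nvmet configfs layout and are an assumption here, since the trace records only the values being echoed, not their destination files:

    modprobe nvmet
    modprobe nvmet-tcp

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1
    mkdir "$subsys" "$ns" "$port"

    # Subsystem: model string and open host access.
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"

    # Namespace 1 backed by the local NVMe disk the script selected.
    echo /dev/nvme0n1 > "$ns/device_path"
    echo 1 > "$ns/enable"

    # TCP listener on 10.0.0.1:4420, the address passed to configure_kernel_target.
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    # Linking the subsystem under the port is what makes it reachable.
    ln -s "$subsys" "$port/subsystems/"
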
00:33:35.875 Firmware Update Granularity: No Information Provided 00:33:35.875 Per-Namespace SMART Log: No 00:33:35.875 Asymmetric Namespace Access Log Page: Not Supported 00:33:35.875 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:35.875 Command Effects Log Page: Not Supported 00:33:35.875 Get Log Page Extended Data: Supported 00:33:35.875 Telemetry Log Pages: Not Supported 00:33:35.875 Persistent Event Log Pages: Not Supported 00:33:35.875 Supported Log Pages Log Page: May Support 00:33:35.875 Commands Supported & Effects Log Page: Not Supported 00:33:35.875 Feature Identifiers & Effects Log Page:May Support 00:33:35.875 NVMe-MI Commands & Effects Log Page: May Support 00:33:35.875 Data Area 4 for Telemetry Log: Not Supported 00:33:35.875 Error Log Page Entries Supported: 1 00:33:35.875 Keep Alive: Not Supported 00:33:35.875 00:33:35.875 NVM Command Set Attributes 00:33:35.875 ========================== 00:33:35.875 Submission Queue Entry Size 00:33:35.875 Max: 1 00:33:35.875 Min: 1 00:33:35.875 Completion Queue Entry Size 00:33:35.875 Max: 1 00:33:35.875 Min: 1 00:33:35.875 Number of Namespaces: 0 00:33:35.875 Compare Command: Not Supported 00:33:35.875 Write Uncorrectable Command: Not Supported 00:33:35.875 Dataset Management Command: Not Supported 00:33:35.875 Write Zeroes Command: Not Supported 00:33:35.875 Set Features Save Field: Not Supported 00:33:35.875 Reservations: Not Supported 00:33:35.875 Timestamp: Not Supported 00:33:35.875 Copy: Not Supported 00:33:35.875 Volatile Write Cache: Not Present 00:33:35.875 Atomic Write Unit (Normal): 1 00:33:35.875 Atomic Write Unit (PFail): 1 00:33:35.875 Atomic Compare & Write Unit: 1 00:33:35.875 Fused Compare & Write: Not Supported 00:33:35.875 Scatter-Gather List 00:33:35.875 SGL Command Set: Supported 00:33:35.875 SGL Keyed: Not Supported 00:33:35.875 SGL Bit Bucket Descriptor: Not Supported 00:33:35.875 SGL Metadata Pointer: Not Supported 00:33:35.875 Oversized SGL: Not Supported 00:33:35.875 SGL Metadata Address: Not Supported 00:33:35.875 SGL Offset: Supported 00:33:35.875 Transport SGL Data Block: Not Supported 00:33:35.875 Replay Protected Memory Block: Not Supported 00:33:35.875 00:33:35.875 Firmware Slot Information 00:33:35.875 ========================= 00:33:35.875 Active slot: 0 00:33:35.875 00:33:35.875 00:33:35.875 Error Log 00:33:35.875 ========= 00:33:35.875 00:33:35.875 Active Namespaces 00:33:35.875 ================= 00:33:35.875 Discovery Log Page 00:33:35.875 ================== 00:33:35.875 Generation Counter: 2 00:33:35.875 Number of Records: 2 00:33:35.875 Record Format: 0 00:33:35.875 00:33:35.875 Discovery Log Entry 0 00:33:35.875 ---------------------- 00:33:35.875 Transport Type: 3 (TCP) 00:33:35.875 Address Family: 1 (IPv4) 00:33:35.875 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:35.875 Entry Flags: 00:33:35.876 Duplicate Returned Information: 0 00:33:35.876 Explicit Persistent Connection Support for Discovery: 0 00:33:35.876 Transport Requirements: 00:33:35.876 Secure Channel: Not Specified 00:33:35.876 Port ID: 1 (0x0001) 00:33:35.876 Controller ID: 65535 (0xffff) 00:33:35.876 Admin Max SQ Size: 32 00:33:35.876 Transport Service Identifier: 4420 00:33:35.876 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:35.876 Transport Address: 10.0.0.1 00:33:35.876 Discovery Log Entry 1 00:33:35.876 ---------------------- 00:33:35.876 Transport Type: 3 (TCP) 00:33:35.876 Address Family: 1 (IPv4) 00:33:35.876 Subsystem Type: 2 (NVM Subsystem) 00:33:35.876 Entry Flags: 
00:33:35.876 Duplicate Returned Information: 0 00:33:35.876 Explicit Persistent Connection Support for Discovery: 0 00:33:35.876 Transport Requirements: 00:33:35.876 Secure Channel: Not Specified 00:33:35.876 Port ID: 1 (0x0001) 00:33:35.876 Controller ID: 65535 (0xffff) 00:33:35.876 Admin Max SQ Size: 32 00:33:35.876 Transport Service Identifier: 4420 00:33:35.876 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:35.876 Transport Address: 10.0.0.1 00:33:35.876 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:35.876 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.876 get_feature(0x01) failed 00:33:35.876 get_feature(0x02) failed 00:33:35.876 get_feature(0x04) failed 00:33:35.876 ===================================================== 00:33:35.876 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:35.876 ===================================================== 00:33:35.876 Controller Capabilities/Features 00:33:35.876 ================================ 00:33:35.876 Vendor ID: 0000 00:33:35.876 Subsystem Vendor ID: 0000 00:33:35.876 Serial Number: e034e1931dd972015d1c 00:33:35.876 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:35.876 Firmware Version: 6.7.0-68 00:33:35.876 Recommended Arb Burst: 6 00:33:35.876 IEEE OUI Identifier: 00 00 00 00:33:35.876 Multi-path I/O 00:33:35.876 May have multiple subsystem ports: Yes 00:33:35.876 May have multiple controllers: Yes 00:33:35.876 Associated with SR-IOV VF: No 00:33:35.876 Max Data Transfer Size: Unlimited 00:33:35.876 Max Number of Namespaces: 1024 00:33:35.876 Max Number of I/O Queues: 128 00:33:35.876 NVMe Specification Version (VS): 1.3 00:33:35.876 NVMe Specification Version (Identify): 1.3 00:33:35.876 Maximum Queue Entries: 1024 00:33:35.876 Contiguous Queues Required: No 00:33:35.876 Arbitration Mechanisms Supported 00:33:35.876 Weighted Round Robin: Not Supported 00:33:35.876 Vendor Specific: Not Supported 00:33:35.876 Reset Timeout: 7500 ms 00:33:35.876 Doorbell Stride: 4 bytes 00:33:35.876 NVM Subsystem Reset: Not Supported 00:33:35.876 Command Sets Supported 00:33:35.876 NVM Command Set: Supported 00:33:35.876 Boot Partition: Not Supported 00:33:35.876 Memory Page Size Minimum: 4096 bytes 00:33:35.876 Memory Page Size Maximum: 4096 bytes 00:33:35.876 Persistent Memory Region: Not Supported 00:33:35.876 Optional Asynchronous Events Supported 00:33:35.876 Namespace Attribute Notices: Supported 00:33:35.876 Firmware Activation Notices: Not Supported 00:33:35.876 ANA Change Notices: Supported 00:33:35.876 PLE Aggregate Log Change Notices: Not Supported 00:33:35.876 LBA Status Info Alert Notices: Not Supported 00:33:35.876 EGE Aggregate Log Change Notices: Not Supported 00:33:35.876 Normal NVM Subsystem Shutdown event: Not Supported 00:33:35.876 Zone Descriptor Change Notices: Not Supported 00:33:35.876 Discovery Log Change Notices: Not Supported 00:33:35.876 Controller Attributes 00:33:35.876 128-bit Host Identifier: Supported 00:33:35.876 Non-Operational Permissive Mode: Not Supported 00:33:35.876 NVM Sets: Not Supported 00:33:35.876 Read Recovery Levels: Not Supported 00:33:35.876 Endurance Groups: Not Supported 00:33:35.876 Predictable Latency Mode: Not Supported 00:33:35.876 Traffic Based Keep ALive: Supported 00:33:35.876 Namespace Granularity: Not Supported 
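Both identify dumps are produced by spdk_nvme_identify pointed at the kernel target, first at the discovery subsystem and then at nqn.2016-06.io.spdk:testnqn. The same target can also be poked with stock nvme-cli for a quick manual check; a sketch, assuming nvme-cli is installed, with the host NQN and host ID taken from the discover call in the trace:

    # Read the discovery log the kernel target serves on 10.0.0.1:4420.
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
        --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204

    # Attach to the advertised NVM subsystem, inspect it, then detach.
    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme list-subsys
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn
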
00:33:35.876 SQ Associations: Not Supported 00:33:35.876 UUID List: Not Supported 00:33:35.876 Multi-Domain Subsystem: Not Supported 00:33:35.876 Fixed Capacity Management: Not Supported 00:33:35.876 Variable Capacity Management: Not Supported 00:33:35.876 Delete Endurance Group: Not Supported 00:33:35.876 Delete NVM Set: Not Supported 00:33:35.876 Extended LBA Formats Supported: Not Supported 00:33:35.876 Flexible Data Placement Supported: Not Supported 00:33:35.876 00:33:35.876 Controller Memory Buffer Support 00:33:35.876 ================================ 00:33:35.876 Supported: No 00:33:35.876 00:33:35.876 Persistent Memory Region Support 00:33:35.876 ================================ 00:33:35.876 Supported: No 00:33:35.876 00:33:35.876 Admin Command Set Attributes 00:33:35.876 ============================ 00:33:35.876 Security Send/Receive: Not Supported 00:33:35.876 Format NVM: Not Supported 00:33:35.876 Firmware Activate/Download: Not Supported 00:33:35.876 Namespace Management: Not Supported 00:33:35.876 Device Self-Test: Not Supported 00:33:35.876 Directives: Not Supported 00:33:35.876 NVMe-MI: Not Supported 00:33:35.876 Virtualization Management: Not Supported 00:33:35.876 Doorbell Buffer Config: Not Supported 00:33:35.876 Get LBA Status Capability: Not Supported 00:33:35.876 Command & Feature Lockdown Capability: Not Supported 00:33:35.876 Abort Command Limit: 4 00:33:35.876 Async Event Request Limit: 4 00:33:35.876 Number of Firmware Slots: N/A 00:33:35.876 Firmware Slot 1 Read-Only: N/A 00:33:35.876 Firmware Activation Without Reset: N/A 00:33:35.876 Multiple Update Detection Support: N/A 00:33:35.876 Firmware Update Granularity: No Information Provided 00:33:35.876 Per-Namespace SMART Log: Yes 00:33:35.876 Asymmetric Namespace Access Log Page: Supported 00:33:35.876 ANA Transition Time : 10 sec 00:33:35.876 00:33:35.876 Asymmetric Namespace Access Capabilities 00:33:35.876 ANA Optimized State : Supported 00:33:35.876 ANA Non-Optimized State : Supported 00:33:35.876 ANA Inaccessible State : Supported 00:33:35.876 ANA Persistent Loss State : Supported 00:33:35.876 ANA Change State : Supported 00:33:35.876 ANAGRPID is not changed : No 00:33:35.876 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:35.876 00:33:35.876 ANA Group Identifier Maximum : 128 00:33:35.876 Number of ANA Group Identifiers : 128 00:33:35.876 Max Number of Allowed Namespaces : 1024 00:33:35.876 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:35.876 Command Effects Log Page: Supported 00:33:35.876 Get Log Page Extended Data: Supported 00:33:35.876 Telemetry Log Pages: Not Supported 00:33:35.876 Persistent Event Log Pages: Not Supported 00:33:35.876 Supported Log Pages Log Page: May Support 00:33:35.876 Commands Supported & Effects Log Page: Not Supported 00:33:35.876 Feature Identifiers & Effects Log Page:May Support 00:33:35.876 NVMe-MI Commands & Effects Log Page: May Support 00:33:35.876 Data Area 4 for Telemetry Log: Not Supported 00:33:35.876 Error Log Page Entries Supported: 128 00:33:35.876 Keep Alive: Supported 00:33:35.876 Keep Alive Granularity: 1000 ms 00:33:35.876 00:33:35.876 NVM Command Set Attributes 00:33:35.876 ========================== 00:33:35.876 Submission Queue Entry Size 00:33:35.876 Max: 64 00:33:35.876 Min: 64 00:33:35.876 Completion Queue Entry Size 00:33:35.876 Max: 16 00:33:35.876 Min: 16 00:33:35.876 Number of Namespaces: 1024 00:33:35.876 Compare Command: Not Supported 00:33:35.876 Write Uncorrectable Command: Not Supported 00:33:35.876 Dataset Management Command: Supported 
00:33:35.876 Write Zeroes Command: Supported 00:33:35.876 Set Features Save Field: Not Supported 00:33:35.876 Reservations: Not Supported 00:33:35.876 Timestamp: Not Supported 00:33:35.876 Copy: Not Supported 00:33:35.876 Volatile Write Cache: Present 00:33:35.876 Atomic Write Unit (Normal): 1 00:33:35.876 Atomic Write Unit (PFail): 1 00:33:35.876 Atomic Compare & Write Unit: 1 00:33:35.876 Fused Compare & Write: Not Supported 00:33:35.876 Scatter-Gather List 00:33:35.876 SGL Command Set: Supported 00:33:35.876 SGL Keyed: Not Supported 00:33:35.876 SGL Bit Bucket Descriptor: Not Supported 00:33:35.876 SGL Metadata Pointer: Not Supported 00:33:35.876 Oversized SGL: Not Supported 00:33:35.876 SGL Metadata Address: Not Supported 00:33:35.876 SGL Offset: Supported 00:33:35.876 Transport SGL Data Block: Not Supported 00:33:35.876 Replay Protected Memory Block: Not Supported 00:33:35.876 00:33:35.876 Firmware Slot Information 00:33:35.876 ========================= 00:33:35.876 Active slot: 0 00:33:35.876 00:33:35.876 Asymmetric Namespace Access 00:33:35.876 =========================== 00:33:35.876 Change Count : 0 00:33:35.876 Number of ANA Group Descriptors : 1 00:33:35.876 ANA Group Descriptor : 0 00:33:35.876 ANA Group ID : 1 00:33:35.876 Number of NSID Values : 1 00:33:35.876 Change Count : 0 00:33:35.876 ANA State : 1 00:33:35.876 Namespace Identifier : 1 00:33:35.876 00:33:35.876 Commands Supported and Effects 00:33:35.876 ============================== 00:33:35.876 Admin Commands 00:33:35.876 -------------- 00:33:35.876 Get Log Page (02h): Supported 00:33:35.876 Identify (06h): Supported 00:33:35.876 Abort (08h): Supported 00:33:35.876 Set Features (09h): Supported 00:33:35.876 Get Features (0Ah): Supported 00:33:35.876 Asynchronous Event Request (0Ch): Supported 00:33:35.876 Keep Alive (18h): Supported 00:33:35.876 I/O Commands 00:33:35.876 ------------ 00:33:35.876 Flush (00h): Supported 00:33:35.876 Write (01h): Supported LBA-Change 00:33:35.876 Read (02h): Supported 00:33:35.876 Write Zeroes (08h): Supported LBA-Change 00:33:35.876 Dataset Management (09h): Supported 00:33:35.876 00:33:35.876 Error Log 00:33:35.876 ========= 00:33:35.876 Entry: 0 00:33:35.876 Error Count: 0x3 00:33:35.876 Submission Queue Id: 0x0 00:33:35.876 Command Id: 0x5 00:33:35.876 Phase Bit: 0 00:33:35.876 Status Code: 0x2 00:33:35.876 Status Code Type: 0x0 00:33:35.876 Do Not Retry: 1 00:33:35.876 Error Location: 0x28 00:33:35.876 LBA: 0x0 00:33:35.876 Namespace: 0x0 00:33:35.876 Vendor Log Page: 0x0 00:33:35.876 ----------- 00:33:35.876 Entry: 1 00:33:35.876 Error Count: 0x2 00:33:35.876 Submission Queue Id: 0x0 00:33:35.876 Command Id: 0x5 00:33:35.876 Phase Bit: 0 00:33:35.876 Status Code: 0x2 00:33:35.876 Status Code Type: 0x0 00:33:35.876 Do Not Retry: 1 00:33:35.876 Error Location: 0x28 00:33:35.876 LBA: 0x0 00:33:35.876 Namespace: 0x0 00:33:35.876 Vendor Log Page: 0x0 00:33:35.876 ----------- 00:33:35.876 Entry: 2 00:33:35.876 Error Count: 0x1 00:33:35.876 Submission Queue Id: 0x0 00:33:35.876 Command Id: 0x4 00:33:35.876 Phase Bit: 0 00:33:35.876 Status Code: 0x2 00:33:35.876 Status Code Type: 0x0 00:33:35.876 Do Not Retry: 1 00:33:35.876 Error Location: 0x28 00:33:35.876 LBA: 0x0 00:33:35.876 Namespace: 0x0 00:33:35.876 Vendor Log Page: 0x0 00:33:35.877 00:33:35.877 Number of Queues 00:33:35.877 ================ 00:33:35.877 Number of I/O Submission Queues: 128 00:33:35.877 Number of I/O Completion Queues: 128 00:33:35.877 00:33:35.877 ZNS Specific Controller Data 00:33:35.877 
============================ 00:33:35.877 Zone Append Size Limit: 0 00:33:35.877 00:33:35.877 00:33:35.877 Active Namespaces 00:33:35.877 ================= 00:33:35.877 get_feature(0x05) failed 00:33:35.877 Namespace ID:1 00:33:35.877 Command Set Identifier: NVM (00h) 00:33:35.877 Deallocate: Supported 00:33:35.877 Deallocated/Unwritten Error: Not Supported 00:33:35.877 Deallocated Read Value: Unknown 00:33:35.877 Deallocate in Write Zeroes: Not Supported 00:33:35.877 Deallocated Guard Field: 0xFFFF 00:33:35.877 Flush: Supported 00:33:35.877 Reservation: Not Supported 00:33:35.877 Namespace Sharing Capabilities: Multiple Controllers 00:33:35.877 Size (in LBAs): 3750748848 (1788GiB) 00:33:35.877 Capacity (in LBAs): 3750748848 (1788GiB) 00:33:35.877 Utilization (in LBAs): 3750748848 (1788GiB) 00:33:35.877 UUID: 138d5735-6366-421b-b299-1591a475bce4 00:33:35.877 Thin Provisioning: Not Supported 00:33:35.877 Per-NS Atomic Units: Yes 00:33:35.877 Atomic Write Unit (Normal): 8 00:33:35.877 Atomic Write Unit (PFail): 8 00:33:35.877 Preferred Write Granularity: 8 00:33:35.877 Atomic Compare & Write Unit: 8 00:33:35.877 Atomic Boundary Size (Normal): 0 00:33:35.877 Atomic Boundary Size (PFail): 0 00:33:35.877 Atomic Boundary Offset: 0 00:33:35.877 NGUID/EUI64 Never Reused: No 00:33:35.877 ANA group ID: 1 00:33:35.877 Namespace Write Protected: No 00:33:35.877 Number of LBA Formats: 1 00:33:35.877 Current LBA Format: LBA Format #00 00:33:35.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:35.877 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:35.877 rmmod nvme_tcp 00:33:35.877 rmmod nvme_fabrics 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.877 09:46:29 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:38.423 09:46:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:40.971 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:40.971 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:41.231 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:41.492 00:33:41.492 real 0m17.967s 00:33:41.492 user 0m4.726s 00:33:41.492 sys 0m10.169s 00:33:41.492 09:46:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:41.492 09:46:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.492 ************************************ 00:33:41.492 END TEST nvmf_identify_kernel_target 00:33:41.492 ************************************ 00:33:41.492 09:46:35 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:41.492 09:46:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:41.492 09:46:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:41.492 09:46:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:41.753 ************************************ 00:33:41.753 START TEST nvmf_auth_host 00:33:41.753 ************************************ 
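Before the next test starts, clean_kernel_target above unwinds everything in reverse order; the ordering matters because configfs will not remove a subsystem directory that is still linked under a port. A condensed sketch of that teardown, with the paths from the trace; the bare "echo 0" in the trace is assumed to land in the namespace's enable file, which the trace does not show explicitly:

    # Disable the namespace, unlink the subsystem from the port, then
    # remove the configfs directories and unload the target modules.
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet
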
00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:41.753 * Looking for test storage... 00:33:41.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.753 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:41.754 09:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.345 
09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:48.345 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:48.345 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:48.345 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:48.345 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.345 09:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:48.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:33:48.607 00:33:48.607 --- 10.0.0.2 ping statistics --- 00:33:48.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.607 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:48.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:33:48.607 00:33:48.607 --- 10.0.0.1 ping statistics --- 00:33:48.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.607 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:48.607 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=502769 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 502769 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 502769 ']' 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
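nvmfappstart launches the SPDK target inside the target namespace and waitforlisten then blocks until the application's RPC socket answers. A minimal sketch of that launch-and-wait pattern, using this workspace's paths; the poll below goes through rpc.py's rpc_get_methods rather than SPDK's own waitforlisten helper, so treat it as an approximation of the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start nvmf_tgt in the target namespace with the flags from the trace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # The RPC endpoint is a unix-domain socket (/var/tmp/spdk.sock), so it is
    # reachable from the default namespace; poll it until the app is ready.
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target exited
        sleep 0.5
    done
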
00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:48.868 09:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=06e4e4ac1129666fad2b079cb2eadd5e 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JyS 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 06e4e4ac1129666fad2b079cb2eadd5e 0 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 06e4e4ac1129666fad2b079cb2eadd5e 0 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=06e4e4ac1129666fad2b079cb2eadd5e 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JyS 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JyS 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.JyS 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:49.811 
09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5404c0f99542e4bf1c9cf3543ae6c41dd8c84665362c8ff44e36582c98d74129 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3ZV 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5404c0f99542e4bf1c9cf3543ae6c41dd8c84665362c8ff44e36582c98d74129 3 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5404c0f99542e4bf1c9cf3543ae6c41dd8c84665362c8ff44e36582c98d74129 3 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5404c0f99542e4bf1c9cf3543ae6c41dd8c84665362c8ff44e36582c98d74129 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3ZV 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3ZV 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3ZV 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:49.811 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e8aa3063a4f1a6fdfabc74dec3cdaab9e67092f60d14d16f 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xap 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e8aa3063a4f1a6fdfabc74dec3cdaab9e67092f60d14d16f 0 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e8aa3063a4f1a6fdfabc74dec3cdaab9e67092f60d14d16f 0 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e8aa3063a4f1a6fdfabc74dec3cdaab9e67092f60d14d16f 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xap 00:33:49.812 09:46:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xap 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xap 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40201f619b75f4649ae29a151fc973d20ac0b7577338c3ce 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.s5t 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40201f619b75f4649ae29a151fc973d20ac0b7577338c3ce 2 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40201f619b75f4649ae29a151fc973d20ac0b7577338c3ce 2 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40201f619b75f4649ae29a151fc973d20ac0b7577338c3ce 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.s5t 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.s5t 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.s5t 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=aecfa04db01247fcdb9657d586eaca2b 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5gU 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key aecfa04db01247fcdb9657d586eaca2b 1 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 aecfa04db01247fcdb9657d586eaca2b 1 
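Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps the hex string as the secret, and wraps it in a DHHC-1 container written to a 0600 temp file. A minimal stand-alone sketch of the same idea, assuming the standard DH-HMAC-CHAP secret representation (the ASCII secret followed by its little-endian CRC-32, base64-encoded; the two-digit field after DHHC-1 is the hash hint: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512):

  key=$(xxd -p -c0 -l 24 /dev/urandom)            # 24 random bytes -> 48 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  python3 -c 'import sys,base64,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" 0 > "$file"
  chmod 0600 "$file"                              # secret stays private; the path is what the test keeps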
00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=aecfa04db01247fcdb9657d586eaca2b 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:49.812 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5gU 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5gU 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5gU 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3dc23c62d21af6c1f4316d9e5616e377 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0ps 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3dc23c62d21af6c1f4316d9e5616e377 1 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3dc23c62d21af6c1f4316d9e5616e377 1 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3dc23c62d21af6c1f4316d9e5616e377 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0ps 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0ps 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0ps 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=c128ec73b88501785f1a74862306039ab81612efe297d4e3 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MOI 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c128ec73b88501785f1a74862306039ab81612efe297d4e3 2 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c128ec73b88501785f1a74862306039ab81612efe297d4e3 2 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:50.073 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c128ec73b88501785f1a74862306039ab81612efe297d4e3 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MOI 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MOI 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.MOI 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=417a9aa7bcd5c69075594818d2e4b5e4 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Jxr 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 417a9aa7bcd5c69075594818d2e4b5e4 0 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 417a9aa7bcd5c69075594818d2e4b5e4 0 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=417a9aa7bcd5c69075594818d2e4b5e4 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Jxr 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Jxr 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Jxr 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af70a9dcdcdbc24aa48f269d77e537c904782576c5fe902ccaba2ccee77a7508 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.51z 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af70a9dcdcdbc24aa48f269d77e537c904782576c5fe902ccaba2ccee77a7508 3 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af70a9dcdcdbc24aa48f269d77e537c904782576c5fe902ccaba2ccee77a7508 3 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af70a9dcdcdbc24aa48f269d77e537c904782576c5fe902ccaba2ccee77a7508 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:50.074 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.51z 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.51z 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.51z 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 502769 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 502769 ']' 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
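At this point all five host secrets and their optional controller (bidirectional) counterparts exist on disk. The slots populated in this run (file names are specific to this invocation; slot 4 has no controller key):

  keys[0]=/tmp/spdk.key-null.JyS    ; ckeys[0]=/tmp/spdk.key-sha512.3ZV
  keys[1]=/tmp/spdk.key-null.xap    ; ckeys[1]=/tmp/spdk.key-sha384.s5t
  keys[2]=/tmp/spdk.key-sha256.5gU  ; ckeys[2]=/tmp/spdk.key-sha256.0ps
  keys[3]=/tmp/spdk.key-sha384.MOI  ; ckeys[3]=/tmp/spdk.key-null.Jxr
  keys[4]=/tmp/spdk.key-sha512.51z  ; ckeys[4]=

host/auth.sh then registers every file with the running nvmf_tgt keyring (the rpc_cmd keyring_file_add_key calls traced below); condensed, with rpc.py standing in for rpc_cmd:

  rpc=scripts/rpc.py                               # talks to the target over /var/tmp/spdk.sock
  for i in "${!keys[@]}"; do
      "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
      [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
  done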
00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JyS 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3ZV ]] 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ZV 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.335 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xap 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.s5t ]] 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s5t 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5gU 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0ps ]] 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0ps 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.MOI 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.336 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Jxr ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Jxr 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.51z 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
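configure_kernel_target (traced next) builds a Linux kernel nvmet subsystem nqn.2024-02.io.spdk:cnode0 backed by the local /dev/nvme0n1 and exposes it over TCP on 10.0.0.1:4420; the SPDK application inside the namespace later connects to it as the NVMe host being authenticated. Because xtrace does not print redirections, only the echoed values are visible in the trace; a sketch of the equivalent configfs writes, assuming the standard nvmet attribute names:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=$nvmet/ports/1
  modprobe nvmet                                   # the tcp port type additionally needs nvmet-tcp
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo 1            > "$subsys/attr_allow_any_host"      # presumably cleared again once allowed_hosts is populated
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420         # should now report the discovery subsystem and cnode0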
00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:50.596 09:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:53.896 Waiting for block devices as requested 00:33:53.896 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:53.896 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:54.157 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:54.157 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:54.157 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:54.418 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:54.418 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:54.418 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:54.679 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:54.679 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:54.679 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:55.623 No valid GPT data, bailing 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:33:55.623 00:33:55.623 Discovery Log Number of Records 2, Generation counter 2 00:33:55.623 =====Discovery Log Entry 0====== 00:33:55.623 trtype: tcp 00:33:55.623 adrfam: ipv4 00:33:55.623 subtype: current discovery subsystem 00:33:55.623 treq: not specified, sq flow control disable supported 00:33:55.623 portid: 1 00:33:55.623 trsvcid: 4420 00:33:55.623 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:55.623 traddr: 10.0.0.1 00:33:55.623 eflags: none 00:33:55.623 sectype: none 00:33:55.623 =====Discovery Log Entry 1====== 00:33:55.623 trtype: tcp 00:33:55.623 adrfam: ipv4 00:33:55.623 subtype: nvme subsystem 00:33:55.623 treq: not specified, sq flow control disable supported 00:33:55.623 portid: 1 00:33:55.623 trsvcid: 4420 00:33:55.623 subnqn: nqn.2024-02.io.spdk:cnode0 00:33:55.623 traddr: 10.0.0.1 00:33:55.623 eflags: none 00:33:55.623 sectype: none 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:55.623 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 
]] 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:55.884 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 nvme0n1 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.144 09:46:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:33:56.144 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.145 
09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.145 nvme0n1 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.145 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.405 09:46:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.405 nvme0n1 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
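Every iteration of the digest/dhgroup/keyid loops follows the same pattern: program the expected secret into the kernel target's host entry, have the SPDK bdev_nvme host authenticate with the matching keyring keys, confirm a controller appears, then detach. A sketch of one iteration (key1/ckey1 with sha256 and ffdhe2048, as in the trace above; the RPC names and flags are the ones the trace uses, while the nvmet host attribute names are an assumption):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"               # hash the target will negotiate
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  cat /tmp/spdk.key-null.xap   > "$host/dhchap_key"       # host secret (key1)
  cat /tmp/spdk.key-sha384.s5t > "$host/dhchap_ctrl_key"  # controller secret, enables bidirectional auth

  rpc=scripts/rpc.py
  "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'     # expect nvme0 if authentication succeeded
  "$rpc" bdev_nvme_detach_controller nvme0                # tear down before the next combination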
00:33:56.405 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.666 09:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.666 nvme0n1 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:33:56.666 09:46:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.666 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.927 nvme0n1 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.927 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.188 nvme0n1 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.188 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.449 09:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 nvme0n1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.709 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.970 nvme0n1 00:33:57.970 
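The get_main_ns_ip helper traced above (nvmf/common.sh@741-755) is what resolves the 10.0.0.1 address passed to every attach call: it maps the transport under test to the name of an environment variable and then dereferences it. The sketch below reconstructs that flow from the traced lines only; the TEST_TRANSPORT variable name and the exact error handling are assumptions, not a verbatim copy of nvmf/common.sh.

# Reconstruction of the IP-selection flow seen at nvmf/common.sh@741-755.
# NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are exported elsewhere in the run;
# TEST_TRANSPORT (here "tcp") is an assumed name for the traced transport value.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs use the target-side IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs (this log) use the initiator IP

    # No transport, or no candidate variable for it -> nothing to resolve.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1 in this run

    echo "${!ip}"
}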
09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.970 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.232 nvme0n1 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
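On the target side, each nvmet_auth_set_key <digest> <dhgroup> <keyid> call traced above loads the matching DHHC-1 secret into the kernel soft target before the next connect attempt; the echoes at host/auth.sh@48-51 are redirected into nvmet's configfs host entry, and xtrace does not display the redirections. The sketch below fills them in assuming the stock Linux nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the configfs path and the keys/ckeys arrays are illustrative rather than a verbatim copy of host/auth.sh.

# Hedged sketch of the target-side key setup, assuming the standard nvmet
# configfs layout; hostnqn and the keys/ckeys arrays mirror what this run uses.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}   # DHHC-1 secrets generated earlier in the test
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha256)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${host_dir}/dhchap_key"      # host secret
    # A controller key is optional; keyid 4 in this run has an empty ckey.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}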
00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.232 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.493 nvme0n1 00:33:58.493 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.493 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.493 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.493 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.493 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.494 
09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:58.494 09:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.494 09:46:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.494 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.758 nvme0n1 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:58.758 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:33:59.329 09:46:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.329 09:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.590 nvme0n1 00:33:59.590 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.590 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:59.590 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:59.590 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.590 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.590 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.851 09:46:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.851 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.112 nvme0n1 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.112 09:46:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.112 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.382 nvme0n1 00:34:00.382 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
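On the initiator side, every connect_authenticate <digest> <dhgroup> <keyid> pass in this trace is the same four RPCs with a different digest/dhgroup/key combination: restrict bdev_nvme to the parameters under test, attach with the corresponding DH-HMAC-CHAP key, confirm the controller came up, and detach again. Condensed into a standalone sketch (rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and the key0..key4/ckey0..ckey4 names are keyring entries registered earlier in the run, outside this excerpt):

# One authentication pass, condensed from the trace; $rootdir points at the
# SPDK source tree in autotest runs, and the key names are pre-registered.
rpc="$rootdir/scripts/rpc.py"
digest=sha256 dhgroup=ffdhe4096 keyid=3
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# 1. Limit the initiator to the digest/dhgroup being exercised.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Attach with the matching DH-HMAC-CHAP key (plus controller key when one exists).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# 3. Authentication succeeded only if the controller is visible afterwards.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Tear down before the next digest/dhgroup/keyid combination.
"$rpc" bdev_nvme_detach_controller nvme0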
00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.383 09:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.648 nvme0n1 00:34:00.648 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.648 09:46:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:00.648 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.648 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:00.648 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.907 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.907 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.908 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.167 nvme0n1 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:01.167 09:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:03.078 09:46:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.078 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.338 nvme0n1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.338 
09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.338 09:46:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.338 09:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.909 nvme0n1 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.909 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.479 nvme0n1 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.479 
09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.479 09:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.048 nvme0n1 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.048 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.619 nvme0n1 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.619 09:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.619 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.565 nvme0n1 00:34:06.565 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.565 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.565 09:46:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.565 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.565 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.565 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.566 09:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.142 nvme0n1 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.142 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.402 09:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.971 nvme0n1 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.971 
09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.971 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
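The cycle traced above repeats for every digest/dhgroup/keyid combination: the host driver is restricted to one DH-HMAC-CHAP digest and DH group, a controller is attached with the matching key (plus the controller key when one exists for that index), the controller list is checked for nvme0, and the controller is detached again before the next combination. Below is a condensed sketch of one such iteration, assuming SPDK's scripts/rpc.py client in place of the test suite's rpc_cmd wrapper, an nvmf target already listening on 10.0.0.1:4420, and the key3/ckey3 secrets already registered under those names (addresses, NQNs, and flags are copied from the trace; the rpc.py path is an assumption):

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate iteration as seen in the trace above.
  # Assumption: an SPDK nvmf target listens on 10.0.0.1:4420 and the
  # DH-HMAC-CHAP secrets are already registered as key3 / ckey3.
  rpc=./scripts/rpc.py        # assumed path to the SPDK RPC client

  digest=sha256
  dhgroup=ffdhe8192
  keyid=3

  # Limit the host to the digest/DH-group pair under test.
  "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key and, for this keyid, the controller key as well.
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Authentication succeeded if the controller shows up by name.
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0

  # Tear down before the next combination.
  "$rpc" bdev_nvme_detach_controller nvme0
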
00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.231 09:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.802 nvme0n1 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.802 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.062 
09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.062 09:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.635 nvme0n1 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.635 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.895 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.896 nvme0n1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
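One detail worth calling out from the expansions traced above: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) is what makes bidirectional authentication optional per key index. When the controller-key table has no entry for an index (key 4 is traced above with an empty ckey), the array expands to nothing and the attach call carries only --dhchap-key; otherwise both arguments are picked up. A minimal sketch of the idiom follows, with placeholder secrets rather than the test's key set:

  #!/usr/bin/env bash
  # Sketch of the ${var:+...} idiom used for the optional controller key.
  # The secrets here are placeholders, not the values from the trace.
  declare -a ckeys=("ctrl-secret-0" "ctrl-secret-1" "")   # index 2: no controller key

  for keyid in 0 1 2; do
      # Expands to two extra words only when ckeys[keyid] is non-empty.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # keyid 0 and 1 print both arguments; keyid 2 prints --dhchap-key only.
      echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
  done
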
00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:09.896 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.156 nvme0n1 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.156 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.157 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.416 nvme0n1 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.416 09:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.676 nvme0n1 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.676 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.937 nvme0n1 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
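
The get_main_ns_ip trace entries just above (nvmf/common.sh@741-755) show how the harness picks the address later passed to bdev_nvme_attach_controller: it maps the active transport to an environment-variable name, dereferences it, and echoes the result (10.0.0.1 on this TCP run). The following is only a sketch reconstructed from those trace lines; the variable TEST_TRANSPORT and the exported NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP values are assumed to come from earlier harness setup and are not visible in this excerpt.

# Hedged reconstruction of the IP-selection logic traced above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs would dereference the first target IP
        ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this log) use the initiator IP
    )

    # Pick the candidate variable name for the active transport, then dereference it;
    # the trace resolves this to 10.0.0.1.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
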
00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.937 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.198 nvme0n1 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
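
The repeating pattern in this part of the log (host/auth.sh@101-104: nvmet_auth_set_key, bdev_nvme_set_options, bdev_nvme_attach_controller, bdev_nvme_get_controllers, bdev_nvme_detach_controller) is one iteration of the DH-HMAC-CHAP sweep: every DH group is tried with every key id. The sketch below condenses that flow using only commands and flags that appear verbatim in the trace; rpc_cmd and nvmet_auth_set_key are harness helpers, and the dhgroups/keys/ckeys arrays plus the key0..key4 / ckey0..ckey4 key names are assumed to have been set up earlier in the test, outside this excerpt.

# Hedged sketch of the loop driving these entries.
for dhgroup in "${dhgroups[@]}"; do            # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do             # 0..4
        # Target side: install the digest, DH group and DH-HMAC-CHAP secret for this key id.
        nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"

        # Host side: restrict the initiator to the same digest/group ...
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # ... attach with the matching key (and controller key, when one exists) ...
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # ... confirm the controller authenticated, then detach before the next combination.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done

Each "nvme0n1" marker in the log is the namespace showing up after one such successful attach, immediately before the verification and detach steps of that iteration.
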
00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.198 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.460 nvme0n1 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.460 09:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.720 nvme0n1 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.720 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.981 nvme0n1 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.981 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.242 nvme0n1 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.242 09:47:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.242 09:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.243 09:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:12.243 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.243 09:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.503 nvme0n1 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.504 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.764 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.765 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.025 nvme0n1 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.025 09:47:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.025 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.026 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.287 nvme0n1 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:13.287 09:47:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.287 09:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.548 nvme0n1 00:34:13.548 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.548 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.548 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.548 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.548 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.548 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:13.810 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.071 nvme0n1 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.071 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.072 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.644 nvme0n1 00:34:14.644 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.644 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.644 09:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.644 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.644 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.644 09:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.644 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.216 nvme0n1 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.216 09:47:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.216 09:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.788 nvme0n1 00:34:15.788 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.788 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.788 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.789 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.362 nvme0n1 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
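The nvmf/common.sh@741-755 records around this point trace get_main_ns_ip, which maps the transport to the *name* of the environment variable holding the initiator-side address and then dereferences it (10.0.0.1 for this TCP run). A rough stand-alone reconstruction from the trace, not the verbatim helper; the TEST_TRANSPORT variable name is an assumption carried over from the wider suite, everything else is read off the records themselves:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP     # values are variable *names*, not addresses
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT ]] && return 1                    # common.sh@747: transport must be set
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # common.sh@747: transport must be mapped
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # common.sh@748: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # common.sh@750: indirect check, 10.0.0.1 here
    echo "${!ip}"                                           # common.sh@755: prints 10.0.0.1 in this run
}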
00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.362 09:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.623 nvme0n1 00:34:16.623 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.623 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.623 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.623 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.623 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.883 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
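Each digest/dhgroup/keyid combination in this loop follows the same host-side pattern: bdev_nvme_set_options pins the allowed DH-HMAC-CHAP digest and FFDHE group, bdev_nvme_attach_controller performs the authenticated connect to 10.0.0.1:4420, and the controller is verified by name and detached before the next combination. A minimal sketch of one such pass (the sha384/ffdhe8192/key0 case that begins here), issued straight through SPDK's scripts/rpc.py, of which rpc_cmd in this trace is the suite's wrapper; the key0/ckey0 keyring names are assumed to have been registered during the earlier test setup:

rpc="./scripts/rpc.py"   # path assumed; point it at the SPDK checkout in use
# restrict the host to a single digest / DH group pair for this pass
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# authenticated connect with host key0 and controller key ckey0, as in the trace
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# confirm the controller came up under the expected name, then tear it down
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0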
00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.884 09:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.455 nvme0n1 00:34:17.455 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.455 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.455 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.455 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.455 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.716 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.287 nvme0n1 00:34:18.287 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.287 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.548 09:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.119 nvme0n1 00:34:19.119 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.380 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.381 09:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.337 nvme0n1 00:34:20.337 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.337 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:20.337 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.337 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.338 09:47:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.338 09:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.910 nvme0n1 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.910 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.911 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.911 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.911 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.172 nvme0n1 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.172 09:47:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.172 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.173 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.173 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.173 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.434 nvme0n1 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.434 09:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.695 nvme0n1 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.695 09:47:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.695 09:47:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.695 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.956 nvme0n1 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.956 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.218 nvme0n1 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.218 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.479 nvme0n1 00:34:22.479 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.479 
09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.479 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.479 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.480 09:47:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.480 09:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.741 nvme0n1 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:22.741 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
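Each nvmet_auth_set_key trace in this run (the echo 'hmac(sha512)' / echo ffdheNNNN / echo DHHC-1:... sequence at auth.sh@48-51) is the target-side half of an iteration: it programs the digest, DH group, host key and optional controller key for the host entry on the kernel nvmet target. The helper's body is not part of this excerpt; a minimal bash sketch, assuming it writes the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under an assumed host directory, with $key/$ckey holding the DHHC-1 secrets chosen for the current key id:

    # Target side: program DH-HMAC-CHAP material for one host NQN (paths assumed).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # digest under test
    echo ffdhe3072      > "$host_dir/dhchap_dhgroup"   # DH group under test
    echo "$key"         > "$host_dir/dhchap_key"       # host key (DHHC-1:xx:...)
    if [[ -n $ckey ]]; then
        echo "$ckey"    > "$host_dir/dhchap_ctrl_key"  # controller key, bidirectional auth only
    fi

The [[ -z ... ]] tests at auth.sh@51 in the trace are the same guard: key ids 0 through 3 carry a controller key, key id 4 does not.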
00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.742 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.003 nvme0n1 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.003 09:47:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:23.003 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
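The host side of each iteration is the same short RPC sequence visible throughout this trace: restrict the initiator to the digest/DH-group pair, resolve the initiator address (get_main_ns_ip selects NVMF_INITIATOR_IP, 10.0.0.1, for the tcp transport), attach with the keys, verify the controller name, then detach before the next key id. A condensed sketch, assuming rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py and that the key names key3/ckey3 were registered earlier in the test, outside this excerpt:

    # Host side: authenticate one (digest, dhgroup, keyid) combination, then tear down.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # auth.sh@64 expects nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0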
00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.004 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.266 nvme0n1 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.266 
09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.266 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.527 nvme0n1 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.527 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.528 09:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.789 nvme0n1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.789 09:47:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.789 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.050 nvme0n1 00:34:24.050 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.050 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.050 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.050 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.050 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
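The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignments at auth.sh@58 are what switch each attach between bidirectional and unidirectional authentication: when the controller key for a key id is empty (key id 4 in this run), the array expands to nothing and --dhchap-ctrlr-key is simply not passed, which is why the key4 attach calls above carry only --dhchap-key key4. A standalone illustration of that bash idiom (array contents here are illustrative placeholders, not the test's keys):

    # ':+' expands to the flag pair only when the controller key is set and non-empty.
    ckeys=( [1]="DHHC-1:02:placeholder" [4]="" )
    for keyid in 1 4; do
        extra=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
        echo "keyid=$keyid -> ${extra[*]:-<controller key flag omitted>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> <controller key flag omitted>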
00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.311 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.572 nvme0n1 00:34:24.572 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.572 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:34:24.572 09:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.572 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.572 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.572 09:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.572 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.833 nvme0n1 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.833 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.095 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 nvme0n1 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
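Each pass of the loop traced above exercises one (digest, dhgroup, keyid) combination: nvmet_auth_set_key programs the kernel nvmet target with the DHHC-1 host key, the controller key (when the keyid defines one), the 'hmac(sha512)' hash and the FFDHE group, and connect_authenticate then configures the initiator and attaches over TCP. A minimal host-side sketch of the current iteration (sha512 / ffdhe6144 / keyid 0), assuming rpc_cmd wraps scripts/rpc.py as in the SPDK test harness and that the key material was registered under the names key0/ckey0 earlier in the script, outside this excerpt:

  # Restrict the initiator to the digest and DH group under test before attaching.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Attach to the authenticated subsystem, supplying the host key and, since keyid 0
  # defines a controller key, the controller key for bidirectional authentication.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0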
00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.357 09:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.930 nvme0n1 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
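Between iterations the harness verifies that authentication actually succeeded and then tears the connection down so the next combination starts clean. A condensed sketch of the rpc_cmd/jq check visible in the trace (error handling simplified; rpc.py is assumed to point at the test target's RPC socket):

  # The attached controller must be reported by name; anything else fails the test.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # Detach so the following (digest, dhgroup, keyid) iteration starts from scratch.
  scripts/rpc.py bdev_nvme_detach_controller nvme0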
00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.930 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.504 nvme0n1 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.504 09:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.766 nvme0n1 00:34:26.766 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.766 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.766 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.766 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.766 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.026 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.026 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.026 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.026 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.026 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.026 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.027 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.599 nvme0n1 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.599 09:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.862 nvme0n1 00:34:27.862 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.862 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.862 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.862 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.862 09:47:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDZlNGU0YWMxMTI5NjY2ZmFkMmIwNzljYjJlYWRkNWWq5lCr: 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: ]] 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTQwNGMwZjk5NTQyZTRiZjFjOWNmMzU0M2FlNmM0MWRkOGM4NDY2NTM2MmM4ZmY0NGUzNjU4MmM5OGQ3NDEyOd/0CKY=: 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.123 09:47:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.124 09:47:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.697 nvme0n1 00:34:28.697 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.697 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.697 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.697 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.697 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.957 09:47:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.529 nvme0n1 00:34:29.529 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.529 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.529 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.529 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.529 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.790 09:47:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVjZmEwNGRiMDEyNDdmY2RiOTY1N2Q1ODZlYWNhMmKdV9CE: 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2RjMjNjNjJkMjFhZjZjMWY0MzE2ZDllNTYxNmUzNzeFlPh3: 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.790 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.364 nvme0n1 00:34:30.364 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.364 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.364 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.364 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.364 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.625 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzEyOGVjNzNiODg1MDE3ODVmMWE3NDg2MjMwNjAzOWFiODE2MTJlZmUyOTdkNGUzgjhfxw==: 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDE3YTlhYTdiY2Q1YzY5MDc1NTk0ODE4ZDJlNGI1ZTQrhiwX: 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:30.626 09:47:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.626 09:47:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.198 nvme0n1 00:34:31.198 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.198 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.198 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.198 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.198 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.459 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.459 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.459 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.459 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.459 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWY3MGE5ZGNkY2RiYzI0YWE0OGYyNjlkNzdlNTM3YzkwNDc4MjU3NmM1ZmU5MDJjY2FiYTJjY2VlNzdhNzUwOOno4dA=: 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:31.460 09:47:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.031 nvme0n1 00:34:32.031 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.031 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.031 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.031 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.031 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZThhYTMwNjNhNGYxYTZmZGZhYmM3NGRlYzNjZGFhYjllNjcwOTJmNjBkMTRkMTZmcraTRQ==: 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDAyMDFmNjE5Yjc1ZjQ2NDlhZTI5YTE1MWZjOTczZDIwYWMwYjc1NzczMzhjM2NlAwSrvw==: 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.292 
09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.292 request: 00:34:32.292 { 00:34:32.292 "name": "nvme0", 00:34:32.292 "trtype": "tcp", 00:34:32.292 "traddr": "10.0.0.1", 00:34:32.292 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:32.292 "adrfam": "ipv4", 00:34:32.292 "trsvcid": "4420", 00:34:32.292 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:32.292 "method": "bdev_nvme_attach_controller", 00:34:32.292 "req_id": 1 00:34:32.292 } 00:34:32.292 Got JSON-RPC error response 00:34:32.292 response: 00:34:32.292 { 00:34:32.292 "code": -32602, 00:34:32.292 "message": "Invalid parameters" 00:34:32.292 } 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:32.292 
09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:32.292 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.293 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.554 request: 00:34:32.554 { 00:34:32.554 "name": "nvme0", 00:34:32.554 "trtype": "tcp", 00:34:32.554 "traddr": "10.0.0.1", 00:34:32.554 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:32.554 "adrfam": "ipv4", 00:34:32.554 "trsvcid": "4420", 00:34:32.554 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:32.554 "dhchap_key": "key2", 00:34:32.554 "method": "bdev_nvme_attach_controller", 00:34:32.554 "req_id": 1 00:34:32.554 } 00:34:32.554 Got JSON-RPC error response 00:34:32.554 response: 00:34:32.554 { 00:34:32.554 "code": -32602, 00:34:32.554 "message": "Invalid parameters" 00:34:32.554 } 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
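The two rejected attach attempts above follow the suite's negative-path pattern: the RPC is expected to fail with -32602, and afterwards the controller list must still be empty. A minimal sketch of that check, assuming a stripped-down stand-in for the NOT helper (the in-tree helper also records the exit status in es, as visible in the trace):

  not() { if "$@"; then return 1; else return 0; fi; }   # simplified stand-in, assumption
  not rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  # the failed attach must not leave a controller behind
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq length) -eq 0 ]]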
00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.554 request: 00:34:32.554 { 00:34:32.554 "name": "nvme0", 00:34:32.554 "trtype": "tcp", 00:34:32.554 "traddr": "10.0.0.1", 00:34:32.554 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:32.554 "adrfam": "ipv4", 00:34:32.554 "trsvcid": "4420", 00:34:32.554 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:32.554 "dhchap_key": "key1", 00:34:32.554 "dhchap_ctrlr_key": "ckey2", 00:34:32.554 "method": "bdev_nvme_attach_controller", 00:34:32.554 
"req_id": 1 00:34:32.554 } 00:34:32.554 Got JSON-RPC error response 00:34:32.554 response: 00:34:32.554 { 00:34:32.554 "code": -32602, 00:34:32.554 "message": "Invalid parameters" 00:34:32.554 } 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:32.554 09:47:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:32.554 rmmod nvme_tcp 00:34:32.554 rmmod nvme_fabrics 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 502769 ']' 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 502769 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 502769 ']' 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 502769 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 502769 00:34:32.554 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 502769' 00:34:32.815 killing process with pid 502769 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 502769 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 502769 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:32.815 09:47:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:32.815 09:47:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:35.364 09:47:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:38.668 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:38.668 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:38.669 09:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.JyS /tmp/spdk.key-null.xap /tmp/spdk.key-sha256.5gU /tmp/spdk.key-sha384.MOI /tmp/spdk.key-sha512.51z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:38.669 09:47:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:41.966 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:34:41.966 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:41.966 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:41.966 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:42.227 00:34:42.227 real 1m0.661s 00:34:42.227 user 0m54.578s 00:34:42.227 sys 0m14.583s 00:34:42.227 09:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:42.227 09:47:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.227 ************************************ 00:34:42.227 END TEST nvmf_auth_host 00:34:42.227 ************************************ 00:34:42.227 09:47:35 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:34:42.227 09:47:35 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:42.227 09:47:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:42.227 09:47:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:42.227 09:47:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.488 ************************************ 00:34:42.488 START TEST nvmf_digest 00:34:42.488 ************************************ 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:42.488 * Looking for test storage... 
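Before the digest suite above begins, the auth-host teardown removes the kernel nvmet target through configfs in the order captured in the preceding log chunk. A sketch of that order, assuming the bare echo 0 is aimed at the namespace enable attribute (the exact path is not shown in the trace):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"            # unlink the allowed host
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"                          # assumed target of the bare 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                                     # unload once configfs is empty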
00:34:42.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.488 09:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:42.489 09:47:35 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:42.489 09:47:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:50.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:50.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.626 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:50.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:50.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:50.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:34:50.627 00:34:50.627 --- 10.0.0.2 ping statistics --- 00:34:50.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.627 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:34:50.627 09:47:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:34:50.627 00:34:50.627 --- 10.0.0.1 ping statistics --- 00:34:50.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.627 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:50.627 ************************************ 00:34:50.627 START TEST nvmf_digest_clean 00:34:50.627 ************************************ 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=519853 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 519853 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 519853 ']' 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.627 
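The fixture those ping checks just validated is a two-endpoint TCP path built from the two cvl ports found earlier: the target port is moved into a network namespace while the initiator stays in the root namespace. A condensed sketch using the device names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP from the target side
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1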
09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:50.627 [2024-05-16 09:47:43.138784] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:34:50.627 [2024-05-16 09:47:43.138847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.627 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.627 [2024-05-16 09:47:43.209469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.627 [2024-05-16 09:47:43.282925] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.627 [2024-05-16 09:47:43.282964] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.627 [2024-05-16 09:47:43.282972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.627 [2024-05-16 09:47:43.282979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.627 [2024-05-16 09:47:43.282984] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
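nvmfappstart launches the target inside that namespace with --wait-for-rpc so configuration can be applied before the listener goes live, and waitforlisten blocks until the RPC socket answers. A rough sketch of that sequence, assuming a simple polling loop in place of the real waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # assumed stand-in for waitforlisten: poll the UNIX-domain RPC socket until it responds
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done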
00:34:50.627 [2024-05-16 09:47:43.283003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:50.627 09:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:50.627 null0 00:34:50.627 [2024-05-16 09:47:44.013789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:50.627 [2024-05-16 09:47:44.037765] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:50.627 [2024-05-16 09:47:44.037977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=520076 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 520076 /var/tmp/bperf.sock 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 520076 ']' 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:50.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:50.627 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:50.627 [2024-05-16 09:47:44.090277] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:34:50.627 [2024-05-16 09:47:44.090326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520076 ] 00:34:50.627 EAL: No free 2048 kB hugepages reported on node 1 00:34:50.627 [2024-05-16 09:47:44.166848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.887 [2024-05-16 09:47:44.231018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.456 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:51.456 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:51.456 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:51.456 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:51.456 09:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:51.716 09:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.716 09:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:51.976 nvme0n1 00:34:51.976 09:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:51.976 09:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:51.976 Running I/O for 2 seconds... 
00:34:54.519 00:34:54.519 Latency(us) 00:34:54.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.519 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:54.519 nvme0n1 : 2.04 19506.02 76.20 0.00 0.00 6424.79 2853.55 43909.12 00:34:54.519 =================================================================================================================== 00:34:54.519 Total : 19506.02 76.20 0.00 0.00 6424.79 2853.55 43909.12 00:34:54.519 0 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:54.519 | select(.opcode=="crc32c") 00:34:54.519 | "\(.module_name) \(.executed)"' 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 520076 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 520076 ']' 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 520076 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 520076 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 520076' 00:34:54.519 killing process with pid 520076 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 520076 00:34:54.519 Received shutdown signal, test time was about 2.000000 seconds 00:34:54.519 00:34:54.519 Latency(us) 00:34:54.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.519 =================================================================================================================== 00:34:54.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 520076 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:54.519 09:47:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=520763 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 520763 /var/tmp/bperf.sock 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 520763 ']' 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:54.519 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:54.520 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:54.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:54.520 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:54.520 09:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:54.520 [2024-05-16 09:47:47.952308] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:34:54.520 [2024-05-16 09:47:47.952365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520763 ] 00:34:54.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:54.520 Zero copy mechanism will not be used. 
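Each run ends the same way as the randread case just completed: the test queries accel statistics over the bdevperf RPC socket and checks that CRC32C work was actually executed, and by which module. With scan_dsa=false the expected module is software. A sketch of that check, mirroring the jq filter shown above:

  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]]          # DSA disabled, software path expected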
00:34:54.520 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.520 [2024-05-16 09:47:48.028701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.781 [2024-05-16 09:47:48.091976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.353 09:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:55.613 nvme0n1 00:34:55.613 09:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:55.613 09:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:55.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:55.874 Zero copy mechanism will not be used. 00:34:55.874 Running I/O for 2 seconds... 
00:34:57.787 00:34:57.787 Latency(us) 00:34:57.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.787 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:57.787 nvme0n1 : 2.00 3686.30 460.79 0.00 0.00 4337.11 744.11 9229.65 00:34:57.787 =================================================================================================================== 00:34:57.787 Total : 3686.30 460.79 0.00 0.00 4337.11 744.11 9229.65 00:34:57.787 0 00:34:57.787 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:57.787 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:57.787 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:57.787 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:57.787 | select(.opcode=="crc32c") 00:34:57.787 | "\(.module_name) \(.executed)"' 00:34:57.787 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 520763 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 520763 ']' 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 520763 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 520763 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 520763' 00:34:58.047 killing process with pid 520763 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 520763 00:34:58.047 Received shutdown signal, test time was about 2.000000 seconds 00:34:58.047 00:34:58.047 Latency(us) 00:34:58.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.047 =================================================================================================================== 00:34:58.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 520763 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:58.047 09:47:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:58.047 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=521457 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 521457 /var/tmp/bperf.sock 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 521457 ']' 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:58.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:58.048 09:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:58.308 [2024-05-16 09:47:51.635611] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:34:58.308 [2024-05-16 09:47:51.635678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521457 ] 00:34:58.308 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.308 [2024-05-16 09:47:51.709392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.308 [2024-05-16 09:47:51.762889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.879 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:58.879 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:58.879 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:58.879 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:58.879 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:59.139 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.139 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.400 nvme0n1 00:34:59.400 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:59.400 09:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:59.660 Running I/O for 2 seconds... 
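[editor note] The bdevperf command line that launches each of these passes is logged in full above; the annotation below is one reading of the flags (the descriptions are an interpretation, the command itself is copied from the trace).

    # Flag-by-flag reading of the bdevperf invocation above:
    #   -m 2                    core mask 0x2 -> one reactor, pinned to core 1
    #   -r /var/tmp/bperf.sock  RPC socket that digest.sh drives via bperf_rpc
    #   -w randwrite            workload type for this pass
    #   -o 4096                 I/O size in bytes
    #   -t 2                    run time in seconds
    #   -q 128                  queue depth
    #   -z                      idle until the perform_tests RPC arrives
    #   --wait-for-rpc          defer subsystem init until framework_start_init
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc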
00:35:01.571 00:35:01.571 Latency(us) 00:35:01.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.571 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:01.571 nvme0n1 : 2.00 22045.66 86.12 0.00 0.00 5797.90 2143.57 13598.72 00:35:01.571 =================================================================================================================== 00:35:01.571 Total : 22045.66 86.12 0.00 0.00 5797.90 2143.57 13598.72 00:35:01.571 0 00:35:01.571 09:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:01.571 09:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:01.571 09:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:01.571 09:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:01.571 | select(.opcode=="crc32c") 00:35:01.571 | "\(.module_name) \(.executed)"' 00:35:01.571 09:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 521457 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 521457 ']' 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 521457 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 521457 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 521457' 00:35:01.832 killing process with pid 521457 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 521457 00:35:01.832 Received shutdown signal, test time was about 2.000000 seconds 00:35:01.832 00:35:01.832 Latency(us) 00:35:01.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.832 =================================================================================================================== 00:35:01.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 521457 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:01.832 09:47:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=522256 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 522256 /var/tmp/bperf.sock 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 522256 ']' 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:01.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:01.832 09:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:01.832 [2024-05-16 09:47:55.369390] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:01.832 [2024-05-16 09:47:55.369443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522256 ] 00:35:01.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:01.832 Zero copy mechanism will not be used. 
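[editor note] Each pass starts a fresh bdevperf and blocks on "waitforlisten <pid> /var/tmp/bperf.sock" before issuing RPCs. The real helper lives in autotest_common.sh and is not reproduced in the trace; the snippet below is only a hypothetical minimal equivalent, to illustrate what it waits for (a live PID plus an answering RPC socket).

    # Hypothetical stand-in for waitforlisten, for illustration only: poll until
    # the given PID is alive and its UNIX-domain RPC socket answers a trivial RPC.
    wait_for_bperf() {
      local pid=$1 sock=$2
      local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
      for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || return 1          # process already gone
        if [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          return 0                                      # socket up and answering
        fi
        sleep 0.1
      done
      return 1
    }
    # Example: wait_for_bperf 522256 /var/tmp/bperf.sock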
00:35:02.092 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.092 [2024-05-16 09:47:55.444226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.092 [2024-05-16 09:47:55.498005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.664 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:02.664 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:35:02.664 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:02.664 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:02.664 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:02.924 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.924 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.184 nvme0n1 00:35:03.184 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:03.184 09:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:03.444 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:03.444 Zero copy mechanism will not be used. 00:35:03.444 Running I/O for 2 seconds... 
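[editor note] After each timed run, the pass/fail decision is not the IOPS table but the accel statistics: digest.sh pulls accel_get_stats from the bdevperf app and checks that crc32c was actually executed, and by the expected module ("software" here, since scan_dsa=false). A condensed version of that check, with the jq filter copied verbatim from the trace and SPDK_ROOT/SOCK as assumed shorthands:

    # Condensed version of the per-run crc32c check, mirroring host/digest.sh.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    read -r acc_module acc_executed < <(
      "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # Pass if crc32c ran at least once and was handled by the expected module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest check OK"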
00:35:05.357 00:35:05.357 Latency(us) 00:35:05.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.357 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:05.357 nvme0n1 : 2.00 4634.41 579.30 0.00 0.00 3446.88 1672.53 10922.67 00:35:05.357 =================================================================================================================== 00:35:05.357 Total : 4634.41 579.30 0.00 0.00 3446.88 1672.53 10922.67 00:35:05.357 0 00:35:05.357 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:05.357 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:05.357 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:05.357 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:05.357 | select(.opcode=="crc32c") 00:35:05.357 | "\(.module_name) \(.executed)"' 00:35:05.357 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:05.617 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:05.617 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:05.617 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:05.617 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:05.617 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 522256 00:35:05.618 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 522256 ']' 00:35:05.618 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 522256 00:35:05.618 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:05.618 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:05.618 09:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 522256 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 522256' 00:35:05.618 killing process with pid 522256 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 522256 00:35:05.618 Received shutdown signal, test time was about 2.000000 seconds 00:35:05.618 00:35:05.618 Latency(us) 00:35:05.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.618 =================================================================================================================== 00:35:05.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 522256 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 519853 00:35:05.618 09:47:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 519853 ']' 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 519853 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:05.618 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 519853 00:35:05.878 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:05.878 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 519853' 00:35:05.879 killing process with pid 519853 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 519853 00:35:05.879 [2024-05-16 09:47:59.181786] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 519853 00:35:05.879 00:35:05.879 real 0m16.236s 00:35:05.879 user 0m31.837s 00:35:05.879 sys 0m3.384s 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.879 ************************************ 00:35:05.879 END TEST nvmf_digest_clean 00:35:05.879 ************************************ 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.879 ************************************ 00:35:05.879 START TEST nvmf_digest_error 00:35:05.879 ************************************ 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=523173 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 523173 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 523173 ']' 
00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:05.879 09:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:06.139 [2024-05-16 09:47:59.451774] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:06.139 [2024-05-16 09:47:59.451818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:06.139 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.139 [2024-05-16 09:47:59.534021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.139 [2024-05-16 09:47:59.605435] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:06.139 [2024-05-16 09:47:59.605482] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:06.139 [2024-05-16 09:47:59.605492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:06.139 [2024-05-16 09:47:59.605499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:06.139 [2024-05-16 09:47:59.605505] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
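[editor note] From here the flow switches to the error-injection variant (nvmf_digest_error). As the trace reads, the target is started with --wait-for-rpc so that, before initialization completes, digest.sh can reassign the crc32c opcode to the "error" accel module (the "Operation crc32c will be assigned to module error" notice just below); the injector is later armed to corrupt crc32c results, so the initiator's data-digest check on received data fails, and the long run of "data digest error on tqpair" / transient transport error completions further down is the expected outcome, retried because bdevperf is configured with --bdev-retry-count -1. A condensed replay of the target-side RPCs, under those assumptions:

    # Condensed target-side setup for the error test. rpc_cmd in the harness
    # talks to the nvmf_tgt RPC socket; SPDK_ROOT and TGT_SOCK are assumed
    # shorthands, and framework_start_init is implied by --wait-for-rpc rather
    # than visible verbatim in this part of the log.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TGT_SOCK=/var/tmp/spdk.sock
    rpc() { "$SPDK_ROOT/scripts/rpc.py" -s "$TGT_SOCK" "$@"; }

    # Route crc32c (used for NVMe/TCP data digests) through the "error" module
    # before the framework finishes initializing.
    rpc accel_assign_opc -o crc32c -m error
    rpc framework_start_init

    # Arm the injector: corrupt the next 256 crc32c operations
    # (one reading of "-t corrupt -i 256"; the flags are copied from the trace).
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256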
00:35:06.139 [2024-05-16 09:47:59.605529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.081 [2024-05-16 09:48:00.335728] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.081 null0 00:35:07.081 [2024-05-16 09:48:00.416682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:07.081 [2024-05-16 09:48:00.440674] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:07.081 [2024-05-16 09:48:00.440905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:07.081 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=523291 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 523291 /var/tmp/bperf.sock 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 523291 ']' 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:07.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:07.082 09:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:07.082 [2024-05-16 09:48:00.494551] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:07.082 [2024-05-16 09:48:00.494602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523291 ] 00:35:07.082 EAL: No free 2048 kB hugepages reported on node 1 00:35:07.082 [2024-05-16 09:48:00.568346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:07.082 [2024-05-16 09:48:00.622356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.024 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.283 nvme0n1 00:35:08.283 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:08.283 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.283 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.283 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.283 09:48:01 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:08.283 09:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:08.283 Running I/O for 2 seconds... 00:35:08.283 [2024-05-16 09:48:01.821084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.283 [2024-05-16 09:48:01.821115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.283 [2024-05-16 09:48:01.821123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.283 [2024-05-16 09:48:01.831041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.283 [2024-05-16 09:48:01.831066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.283 [2024-05-16 09:48:01.831073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.844405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.844425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.844432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.857435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.857453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.857460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.870828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.870847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.870854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.881112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.881130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.881136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.893754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.893772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.893779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.906921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.906939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.906946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.919604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.919621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.919628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.930940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.930957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.930964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.942200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.942217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.942227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.955528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.955546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.955552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.967758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.967775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.967781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.978385] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.978402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.978409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:01.990736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:01.990754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:01.990760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.004081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.004099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.004106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.017595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.017613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.017619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.030530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.030547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.030554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.042160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.042177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.042184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.053772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.053793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.053800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.066621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.066639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.066645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.079421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.079440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.079447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.091919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.091937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.091943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.545 [2024-05-16 09:48:02.102925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.545 [2024-05-16 09:48:02.102942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.545 [2024-05-16 09:48:02.102949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.115928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.115946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.115953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.128934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.128952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.128958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.141276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.141294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.141301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.153320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.153338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.153344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.164981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.164999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.165005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.178522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.178539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.178547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.189454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.189472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.189479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.201596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.201614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.201621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.213276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.213294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.213300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.225739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.225757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.225764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.238767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.238785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.238791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.253090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.807 [2024-05-16 09:48:02.253108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.807 [2024-05-16 09:48:02.253114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.807 [2024-05-16 09:48:02.267012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.267032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.267039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.280558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.280576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.280582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.292310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.292327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.292333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.305531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.305549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.305555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.317473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.317490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:08.808 [2024-05-16 09:48:02.317497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.329374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.329391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.329398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.343536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.343553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.343560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.808 [2024-05-16 09:48:02.353322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:08.808 [2024-05-16 09:48:02.353340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.808 [2024-05-16 09:48:02.353346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.367252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.367270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.367277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.379384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.379401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.379408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.392826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.392844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.392850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.404985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.405002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:16144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.405008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.415082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.415099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.415106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.428178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.428196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.428203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.441118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.441136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.441143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.453691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.453708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.453715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.466005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.466024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.466030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.477654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.477671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.477681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.491347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.491364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.491371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.503699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.503716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.070 [2024-05-16 09:48:02.503724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.070 [2024-05-16 09:48:02.513854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.070 [2024-05-16 09:48:02.513872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.513878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.528084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.528101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.528107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.540230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.540248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.540254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.552020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.552037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.552044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.564880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.564899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.564906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.578037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 
00:35:09.071 [2024-05-16 09:48:02.578057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.578064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.590192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.590212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.590219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.601555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.601572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.601578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.613835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.613852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.613859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.071 [2024-05-16 09:48:02.627097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.071 [2024-05-16 09:48:02.627115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.071 [2024-05-16 09:48:02.627121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.331 [2024-05-16 09:48:02.638369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.638387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.638395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.649818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.649835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.649842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.661638] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.661656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.661662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.674527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.674544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.674550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.687430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.687447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.687454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.700518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.700536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.700542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.714188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.714206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.714213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.726875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.726893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.726899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.736840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.736858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.736864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.750570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.750587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.750594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.763654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.763671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.763679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.775371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.775388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.775394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.788397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.788414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.788420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.799741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.799761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.799768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.812993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.813010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.813017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.824152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.824169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.824176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.837672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.837689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.837696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.849960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.849977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.849983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.860277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.860294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.860301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.873423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.873440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.873447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.332 [2024-05-16 09:48:02.887190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.332 [2024-05-16 09:48:02.887208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.332 [2024-05-16 09:48:02.887214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.594 [2024-05-16 09:48:02.899067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.594 [2024-05-16 09:48:02.899085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.594 [2024-05-16 09:48:02.899091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.594 [2024-05-16 09:48:02.911272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.594 [2024-05-16 09:48:02.911290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.594 [2024-05-16 09:48:02.911297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.594 [2024-05-16 09:48:02.923719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.594 [2024-05-16 09:48:02.923736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:02.923743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:02.935993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:02.936010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:02.936016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:02.949011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:02.949028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:02.949034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:02.961324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:02.961342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:02.961348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:02.973326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:02.973344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:02.973350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:02.985957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:02.985974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:02.985980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:02.996396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:02.996412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 
[2024-05-16 09:48:02.996419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.009362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.009380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.009389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.022266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.022283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.022290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.034195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.034212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.034218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.048302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.048319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.048326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.058050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.058072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.058078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.072276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.072292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.072298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.084023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.084040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1455 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.084047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.095946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.095964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.095970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.108610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.108627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.108633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.121111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.121130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.121137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.132349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.132366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.132373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.595 [2024-05-16 09:48:03.145380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.595 [2024-05-16 09:48:03.145398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.595 [2024-05-16 09:48:03.145405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.158518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.158535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.158542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.170227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.170244] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.170250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.182222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.182240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.182246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.194700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.194718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.194724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.207382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.207400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.207407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.219724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.219741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.219748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.231985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.232002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.232009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.244921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.244937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.244944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.258281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 
09:48:03.258298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.258305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.267728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.267745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.267752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.281394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.281411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.281417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.294088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.294106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.294112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.307143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.307160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.307166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.319953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.319971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.319977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.330585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.330603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.330614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.343176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.343193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.343199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.357711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.357728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.357735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.367228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.367245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.367251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.382759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.856 [2024-05-16 09:48:03.382776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.856 [2024-05-16 09:48:03.382783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.856 [2024-05-16 09:48:03.393748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.857 [2024-05-16 09:48:03.393766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.857 [2024-05-16 09:48:03.393773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.857 [2024-05-16 09:48:03.406382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:09.857 [2024-05-16 09:48:03.406400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.857 [2024-05-16 09:48:03.406407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.120 [2024-05-16 09:48:03.420190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.120 [2024-05-16 09:48:03.420207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.120 [2024-05-16 09:48:03.420214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.120 [2024-05-16 09:48:03.432266] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.120 [2024-05-16 09:48:03.432283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.120 [2024-05-16 09:48:03.432289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.120 [2024-05-16 09:48:03.445293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.120 [2024-05-16 09:48:03.445311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.120 [2024-05-16 09:48:03.445317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.120 [2024-05-16 09:48:03.457134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.120 [2024-05-16 09:48:03.457151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.120 [2024-05-16 09:48:03.457158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.467417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.467435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.467442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.481703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.481720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.481727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.494670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.494688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.494695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.505953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.505969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.505976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.518294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.518312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.518318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.531013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.531030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.531037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.544222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.544240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.544249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.555244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.555261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.555267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.568242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.568259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.568266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.578471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.578488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.578495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.592456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.592473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.592480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.604699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.604716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.604722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.617684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.617701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.617708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.629666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.629683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.629689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.640637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.640653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.640660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.654131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.654151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.654157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.121 [2024-05-16 09:48:03.667965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.121 [2024-05-16 09:48:03.667983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.121 [2024-05-16 09:48:03.667989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.681807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.681824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.681830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.691836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.691853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.691859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.705090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.705108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.705114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.717116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.717133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.717139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.729652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.729669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.729676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.742035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.742057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.742063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.754790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.754807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.754814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.766349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.766366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:10.384 [2024-05-16 09:48:03.766372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.384 [2024-05-16 09:48:03.780223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.384 [2024-05-16 09:48:03.780241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.384 [2024-05-16 09:48:03.780247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.385 [2024-05-16 09:48:03.792071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.385 [2024-05-16 09:48:03.792088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.385 [2024-05-16 09:48:03.792095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.385 [2024-05-16 09:48:03.804415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x247c470) 00:35:10.385 [2024-05-16 09:48:03.804432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.385 [2024-05-16 09:48:03.804439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.385 00:35:10.385 Latency(us) 00:35:10.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.385 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:10.385 nvme0n1 : 2.00 20553.86 80.29 0.00 0.00 6221.04 2266.45 16056.32 00:35:10.385 =================================================================================================================== 00:35:10.385 Total : 20553.86 80.29 0.00 0.00 6221.04 2266.45 16056.32 00:35:10.385 0 00:35:10.385 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:10.385 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:10.385 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:10.385 | .driver_specific 00:35:10.385 | .nvme_error 00:35:10.385 | .status_code 00:35:10.385 | .command_transient_transport_error' 00:35:10.385 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 523291 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 523291 ']' 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 523291 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 
-- # '[' Linux = Linux ']' 00:35:10.655 09:48:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 523291 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 523291' 00:35:10.655 killing process with pid 523291 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 523291 00:35:10.655 Received shutdown signal, test time was about 2.000000 seconds 00:35:10.655 00:35:10.655 Latency(us) 00:35:10.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.655 =================================================================================================================== 00:35:10.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 523291 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=524148 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 524148 /var/tmp/bperf.sock 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 524148 ']' 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:10.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:10.655 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:10.922 [2024-05-16 09:48:04.216406] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:10.922 [2024-05-16 09:48:04.216478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524148 ] 00:35:10.922 I/O size of 131072 is greater than zero copy threshold (65536). 
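For readability, the transient-error check that host/digest.sh performs above condenses to the sketch below. It is reassembled from the xtrace lines: the jq filter, the /var/tmp/bperf.sock socket and the 161 > 0 comparison are copied from the log, while the long Jenkins workspace prefix is shortened to $rootdir and errcount is only an illustrative variable name.

    # Count the COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1
    errcount=$($rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))    # this run counted 161, so the digest-error check passes

    # The old bdevperf instance is then killed and run_bperf_err starts the next case:
    # randread, 128 KiB I/O, queue depth 16, again listening on /var/tmp/bperf.sock
    $rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &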
00:35:10.922 Zero copy mechanism will not be used. 00:35:10.922 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.922 [2024-05-16 09:48:04.292161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.922 [2024-05-16 09:48:04.345278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.493 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:11.493 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:11.493 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.493 09:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.754 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:11.754 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.754 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.754 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.754 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.754 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.015 nvme0n1 00:35:12.015 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:12.015 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.015 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.015 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.015 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.015 09:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.015 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.015 Zero copy mechanism will not be used. 00:35:12.015 Running I/O for 2 seconds... 
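The setup the trace walks through for this second run reduces to the RPC sequence below. This is a sketch assembled from the same xtrace output (paths again abbreviated with $rootdir): bperf_rpc and bperf_py are the digest.sh wrappers that point rpc.py and bdevperf.py at /var/tmp/bperf.sock, rpc_cmd is the framework helper invoked without that socket, and every flag value is taken verbatim from the log.

    # enable NVMe error counters and unlimited bdev-layer retries on the bdevperf side
    $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # attach the target with data digest enabled (--ddgst), which is what this test exercises
    $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # re-arm crc32c error injection (-t corrupt -i 32, as traced) so data digest checks start failing,
    # then drive I/O for 2 seconds and let bdevperf accumulate the error statistics
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests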
00:35:12.015 [2024-05-16 09:48:05.472700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.472730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.472738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.481314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.481336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.481343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.491827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.491845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.491852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.501205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.501223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.501229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.511296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.511314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.511320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.521774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.521796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.521802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.528825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.528842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.528849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.535372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.535390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.535396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.545302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.545320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.545327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.557461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.015 [2024-05-16 09:48:05.557478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.015 [2024-05-16 09:48:05.557483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.015 [2024-05-16 09:48:05.566921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.016 [2024-05-16 09:48:05.566940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.016 [2024-05-16 09:48:05.566946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.577511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.577530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.278 [2024-05-16 09:48:05.577536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.584394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.584412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.278 [2024-05-16 09:48:05.584419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.596467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.596485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.278 [2024-05-16 09:48:05.596492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.605281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.605299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.278 [2024-05-16 09:48:05.605305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.615174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.615192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.278 [2024-05-16 09:48:05.615198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.624708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.624726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.278 [2024-05-16 09:48:05.624733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.278 [2024-05-16 09:48:05.632893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.278 [2024-05-16 09:48:05.632911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.632918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.642368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.642386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.642392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.651698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.651716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.651722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.657290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.657307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:12.279 [2024-05-16 09:48:05.657313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.667532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.667549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.667556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.678145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.678163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.678172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.687989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.688007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.693693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.693711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.693718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.701850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.701868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.701874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.710157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.710174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.710181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.714859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.714876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.714882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.720287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.720305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.720312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.728821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.728839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.728846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.738531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.738549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.738555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.747949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.747971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.747977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.760245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.760263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.760269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.773185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.773203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.773209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.785378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.785397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.785403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.797025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.797043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.797049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.809441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.809459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.809466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.821603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.821621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.821628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.279 [2024-05-16 09:48:05.833284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.279 [2024-05-16 09:48:05.833303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.279 [2024-05-16 09:48:05.833310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.845628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.845647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.845653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.852385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.852403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.852409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.859717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.859734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.859741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.867361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.867378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.867384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.873822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.873840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.873847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.881060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.881078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.889576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.889594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.889600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.895196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.895214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.895220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.901218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.901236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.901242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.913573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.913590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.913600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.924881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.924899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.924905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.937431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.937449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.937456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.947339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.947357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.947363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.953474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.953492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.953498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.961393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.961411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.961418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.970958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.970975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.970981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.979977] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.979995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.980001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.987810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.987828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.987835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:05.998033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:05.998056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:05.998063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:06.006904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:06.006921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.541 [2024-05-16 09:48:06.006927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.541 [2024-05-16 09:48:06.015642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.541 [2024-05-16 09:48:06.015660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.015666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.027994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.028012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.028019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.034141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.034159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.034165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
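Each injected corruption appears in the stream above as three consecutive prints: nvme_tcp.c flags the data digest error on the qpair, nvme_io_qpair_print_command shows the offending READ, and spdk_nvme_print_completion reports its status as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the failure mode this test is meant to provoke. A quick way to tally both sides when reading a capture like this one (a hypothetical grep over a saved log file, not part of the test scripts; entries here are packed several to a physical line, so count matches rather than lines):

  grep -o 'data digest error on tqpair' bperf.log | wc -l
  grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log | wc -l

The two counts should agree, since every digest mismatch detected on receive maps to one failed READ completion.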
00:35:12.542 [2024-05-16 09:48:06.039498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.039516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.039523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.048451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.048468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.048475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.055295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.055312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.055318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.060035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.060058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.060070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.065093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.065110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.065116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.070358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.070375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.070381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.080677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.080694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.080701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.542 [2024-05-16 09:48:06.092098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.542 [2024-05-16 09:48:06.092116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.542 [2024-05-16 09:48:06.092123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.804 [2024-05-16 09:48:06.102591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.804 [2024-05-16 09:48:06.102609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.804 [2024-05-16 09:48:06.102616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.804 [2024-05-16 09:48:06.113243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.804 [2024-05-16 09:48:06.113261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.804 [2024-05-16 09:48:06.113268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.804 [2024-05-16 09:48:06.122452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.804 [2024-05-16 09:48:06.122470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.804 [2024-05-16 09:48:06.122476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.804 [2024-05-16 09:48:06.132275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.132293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.132300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.139231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.139252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.139258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.149080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.149098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.149104] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.156346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.156364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.156371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.164484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.164502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.164508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.172420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.172438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.172445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.182687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.182705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.182711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.189388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.189406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.189412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.195140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.195158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.195164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.205176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.205194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.205200] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.215375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.215393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.215400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.225878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.225896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.225902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.235605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.235623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.235629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.241276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.241293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.241300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.250464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.250481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.250488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.257393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.257410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.257416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.265318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.265336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:12.805 [2024-05-16 09:48:06.265342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.276589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.276607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.276614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.284049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.284070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.284080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.288125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.288141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.288147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.295608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.295625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.295631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.305092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.305109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.305115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.313032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.313049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.313060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.325163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.325181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.325187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.337768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.337786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.337792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.350079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.350097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.350103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:12.805 [2024-05-16 09:48:06.362145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:12.805 [2024-05-16 09:48:06.362162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.805 [2024-05-16 09:48:06.362168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.374127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.374147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.374153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.386611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.386628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.386634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.398606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.398624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.398630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.411100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.411117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.411123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.422584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.422601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.422607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.432868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.432885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.432891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.438877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.438894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.438900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.444235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.444252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.444259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.454425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.454442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.454448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.462515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.462531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.462538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.470242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.470258] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.067 [2024-05-16 09:48:06.470264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.067 [2024-05-16 09:48:06.478045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.067 [2024-05-16 09:48:06.478066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.478072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.487114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.487131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.487137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.493738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.493756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.493762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.505759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.505776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.505782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.517513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.517531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.517537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.524219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.524236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.524242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.533423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 
00:35:13.068 [2024-05-16 09:48:06.533440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.533450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.540809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.540827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.540833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.546147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.546164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.546170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.556571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.556588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.556595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.562378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.562395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.562401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.572791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.572808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.572814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.577149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.577166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.577173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.587798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.587815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.587821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.598308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.598326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.598332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.604261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.604279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.604285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.612852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.612869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.612876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.618422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.618440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.618446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.068 [2024-05-16 09:48:06.625070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.068 [2024-05-16 09:48:06.625088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.068 [2024-05-16 09:48:06.625094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.635034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.635056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.635062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.644607] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.644625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.644632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.651412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.651429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.651435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.659826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.659844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.659850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.667589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.667606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.667616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.676123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.676140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.676146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.683469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.683486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.683491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.688539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.688556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.688563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
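Each injected CRC failure in this stretch of the log produces the same three console lines: nvme_tcp reports a data digest error on the queue pair, nvme_qpair prints the READ command that carried the corrupted payload, and the completion is logged as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 00h (generic) with status code 22h (Transient Transport Error) and dnr:0, so the host is allowed to retry. These are the completions the test later counts through the bdev NVMe error statistics. A minimal, hypothetical way to tally them from a saved console log (the file name autotest.log is an assumption, not part of the test):

  # count injected digest errors and the transient-transport completions they produced
  grep -o 'data digest error on tqpair' autotest.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' autotest.log | wc -l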
00:35:13.330 [2024-05-16 09:48:06.699959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.699976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.699982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.711759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.711777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.711783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.722186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.722202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.722209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.733543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.733562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.733568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.745150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.745168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.745174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.755920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.755940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.755946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.767043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.767065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.767072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.778440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.778457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.778463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.789861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.789878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.789885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.796227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.796244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.796250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.807065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.807083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.807089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.818028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.818045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.818055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.829508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.829526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.829532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.840523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.840540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.840547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.852483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.852501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.852507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.862933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.862951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.862957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.868484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.868501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.330 [2024-05-16 09:48:06.868507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.330 [2024-05-16 09:48:06.880251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.330 [2024-05-16 09:48:06.880268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.331 [2024-05-16 09:48:06.880274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.890488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.890506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.890512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.900527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.900545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.900551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.911332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.911350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.911356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.919852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.919870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.919876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.931179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.931197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.931206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.941858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.941876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.941882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.954900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.954918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.954924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.967238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.967255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.967261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.979619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.979637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:06.979643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:06.991263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:06.991281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 
[2024-05-16 09:48:06.991287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.001140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.001158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.001164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.011044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.011065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.011072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.022717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.022734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.022740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.033962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.033984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.033990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.045974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.045993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.045999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.057097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.057114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.057120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.068822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.068839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.068846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.081570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.592 [2024-05-16 09:48:07.081588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.592 [2024-05-16 09:48:07.081594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.592 [2024-05-16 09:48:07.093839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.593 [2024-05-16 09:48:07.093856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.593 [2024-05-16 09:48:07.093863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.593 [2024-05-16 09:48:07.103891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.593 [2024-05-16 09:48:07.103908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.593 [2024-05-16 09:48:07.103915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.593 [2024-05-16 09:48:07.114391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.593 [2024-05-16 09:48:07.114409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.593 [2024-05-16 09:48:07.114415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.593 [2024-05-16 09:48:07.125131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.593 [2024-05-16 09:48:07.125149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.593 [2024-05-16 09:48:07.125155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.593 [2024-05-16 09:48:07.136397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.593 [2024-05-16 09:48:07.136414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.593 [2024-05-16 09:48:07.136420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.593 [2024-05-16 09:48:07.144859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.593 [2024-05-16 09:48:07.144877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.593 [2024-05-16 09:48:07.144883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.155743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.155760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.155766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.167556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.167574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.167580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.179188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.179205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.179211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.189337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.189354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.189360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.199145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.199162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.199169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.209131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.209148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.209154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.220905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.220923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.220932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.231448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.231466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.231473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.240459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.240477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.240483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.251776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.251794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.251800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.262333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.262351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.262357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.274186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.274204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.274211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.284655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 [2024-05-16 09:48:07.284672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.284679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.855 [2024-05-16 09:48:07.295076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.855 
[2024-05-16 09:48:07.295094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.855 [2024-05-16 09:48:07.295102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.306821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.306840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.306846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.316004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.316028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.316035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.327008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.327026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.327033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.338685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.338703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.338710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.349368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.349385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.349392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.361038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.361060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.361067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.371661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.371679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.371685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.381873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.381890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.381897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.392547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.392565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.392571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.403028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.403046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.403060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.856 [2024-05-16 09:48:07.412339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:13.856 [2024-05-16 09:48:07.412356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.856 [2024-05-16 09:48:07.412363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.117 [2024-05-16 09:48:07.422490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:14.117 [2024-05-16 09:48:07.422509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.117 [2024-05-16 09:48:07.422515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.117 [2024-05-16 09:48:07.433751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390) 00:35:14.117 [2024-05-16 09:48:07.433768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.117 [2024-05-16 09:48:07.433775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.117 [2024-05-16 09:48:07.441391] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390)
00:35:14.117 [2024-05-16 09:48:07.441409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.117 [2024-05-16 09:48:07.441415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:14.117 [2024-05-16 09:48:07.447205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390)
00:35:14.117 [2024-05-16 09:48:07.447223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.117 [2024-05-16 09:48:07.447229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:14.117 [2024-05-16 09:48:07.452696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390)
00:35:14.117 [2024-05-16 09:48:07.452714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.118 [2024-05-16 09:48:07.452720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:14.118 [2024-05-16 09:48:07.461173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13b1390)
00:35:14.118 [2024-05-16 09:48:07.461191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.118 [2024-05-16 09:48:07.461197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:14.118
00:35:14.118 Latency(us)
00:35:14.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:14.118 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:14.118 nvme0n1 : 2.00 3282.82 410.35 0.00 0.00 4870.61 662.19 13161.81
00:35:14.118 ===================================================================================================================
00:35:14.118 Total : 3282.82 410.35 0.00 0.00 4870.61 662.19 13161.81
00:35:14.118 0
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:14.118 | .driver_specific
00:35:14.118 | .nvme_error
00:35:14.118 | .status_code
00:35:14.118 | .command_transient_transport_error'
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 524148
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 524148 ']'
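Before the first bperf instance is killed off here, the get_transient_errcount check traced just above reduces to a single RPC-plus-jq pipeline. A minimal sketch of that check, assuming the same rpc.py path and bperf socket shown in the trace (in this run the counter read back 211, which is why the (( 211 > 0 )) assertion passed):

  # ask bdevperf for per-bdev I/O statistics and pull out the NVMe error counter;
  # the nvme_error counters are maintained because bdev_nvme_set_options was given --nvme-error-stat
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 ))   # the digest-error test only passes if at least one transient transport error was recorded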
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 524148
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:35:14.118 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 524148
00:35:14.378 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:35:14.378 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:35:14.378 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 524148'
00:35:14.379 killing process with pid 524148
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 524148
00:35:14.379 Received shutdown signal, test time was about 2.000000 seconds
00:35:14.379
00:35:14.379 Latency(us)
00:35:14.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:14.379 ===================================================================================================================
00:35:14.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 524148
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=525097
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 525097 /var/tmp/bperf.sock
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 525097 ']'
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:14.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:35:14.379 09:48:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:14.379 [2024-05-16 09:48:07.872336] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization...
00:35:14.379 [2024-05-16 09:48:07.872389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525097 ]
00:35:14.639 EAL: No free 2048 kB hugepages reported on node 1
00:35:14.639 [2024-05-16 09:48:07.946158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:14.639 [2024-05-16 09:48:07.999228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:35:15.210 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:35:15.210 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:35:15.210 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:15.210 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:15.471 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:15.471 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:15.471 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:15.471 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:15.471 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:15.471 09:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:15.731 nvme0n1
00:35:15.731 09:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:15.731 09:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:15.731 09:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:15.731 09:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:15.731 09:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:15.731 09:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:15.731 Running I/O for 2 seconds...
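Condensed, the RPC sequence traced just above is what wires up this randwrite pass. The following is a sketch of the traced commands, not the digest.sh source; the bperf_rpc expansion is taken from the trace, while the rpc_cmd body is an assumption (the harness resolves it to scripts/rpc.py against its own default socket, which is not shown in this excerpt), and the meaning of the accel_error_inject_error flags is left as traced:

  bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }   # assumed: default SPDK RPC socket

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # keep NVMe error counters, retry failed I/O
  rpc_cmd accel_error_inject_error -o crc32c -t disable                      # no crc32c corruption while the controller attaches
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # data digest enabled on the new controller
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256               # start corrupting crc32c results (flags as traced)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                                   # drive the 2-second randwrite workload

The Data digest error and COMMAND TRANSIENT TRANSPORT ERROR entries that follow are those corrupted writes being detected and retried.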
00:35:15.731 [2024-05-16 09:48:09.259188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eb760 00:35:15.731 [2024-05-16 09:48:09.260938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.731 [2024-05-16 09:48:09.260965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:15.731 [2024-05-16 09:48:09.270150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e3d08 00:35:15.731 [2024-05-16 09:48:09.271516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.731 [2024-05-16 09:48:09.271534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:15.731 [2024-05-16 09:48:09.283076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190df550 00:35:15.731 [2024-05-16 09:48:09.284781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.731 [2024-05-16 09:48:09.284802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.293202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ecc78 00:35:15.993 [2024-05-16 09:48:09.294548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.294564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.305802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fac10 00:35:15.993 [2024-05-16 09:48:09.307182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.307199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.319083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f2d80 00:35:15.993 [2024-05-16 09:48:09.321050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.321072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.329670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc998 00:35:15.993 [2024-05-16 09:48:09.331187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.331204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.339292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc128 00:35:15.993 [2024-05-16 09:48:09.340140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.340156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.351008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e12d8 00:35:15.993 [2024-05-16 09:48:09.351883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.351900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.364262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e12d8 00:35:15.993 [2024-05-16 09:48:09.365730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.365747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.373729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190feb58 00:35:15.993 [2024-05-16 09:48:09.374592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.374608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.386220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fbcf0 00:35:15.993 [2024-05-16 09:48:09.387106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.387123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.397217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ff3c8 00:35:15.993 [2024-05-16 09:48:09.398028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.398044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.411752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f20d8 00:35:15.993 [2024-05-16 09:48:09.413419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.413435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.421991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0788 00:35:15.993 [2024-05-16 09:48:09.423029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.423044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.433735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ee190 00:35:15.993 [2024-05-16 09:48:09.434758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.434774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.445497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f7538 00:35:15.993 [2024-05-16 09:48:09.446532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.446548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.456642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f7da8 00:35:15.993 [2024-05-16 09:48:09.457667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.457683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.469270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ee190 00:35:15.993 [2024-05-16 09:48:09.470293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.470310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.480958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ed0b0 00:35:15.993 [2024-05-16 09:48:09.481987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.482004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.492670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0788 00:35:15.993 [2024-05-16 09:48:09.493688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.493704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.505928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190de470 00:35:15.993 [2024-05-16 09:48:09.507588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.507604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.516532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e49b0 00:35:15.993 [2024-05-16 09:48:09.517714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.517730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.527972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f92c0 00:35:15.993 [2024-05-16 09:48:09.529117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.529133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:15.993 [2024-05-16 09:48:09.540765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0350 00:35:15.993 [2024-05-16 09:48:09.542221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:15.993 [2024-05-16 09:48:09.542238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.254 [2024-05-16 09:48:09.553668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1b48 00:35:16.254 [2024-05-16 09:48:09.555487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.254 [2024-05-16 09:48:09.555502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.254 [2024-05-16 09:48:09.563102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e88f8 00:35:16.254 [2024-05-16 09:48:09.564237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.254 [2024-05-16 09:48:09.564253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:16.254 [2024-05-16 09:48:09.575941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ef270 00:35:16.254 [2024-05-16 09:48:09.577393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.254 [2024-05-16 09:48:09.577408] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.254 [2024-05-16 09:48:09.587532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ed0b0 00:35:16.254 [2024-05-16 09:48:09.588934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.254 [2024-05-16 09:48:09.588950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:16.254 [2024-05-16 09:48:09.598062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eaef0 00:35:16.255 [2024-05-16 09:48:09.599117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.599133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.612022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e7c50 00:35:16.255 [2024-05-16 09:48:09.613761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.613777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.621756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e73e0 00:35:16.255 [2024-05-16 09:48:09.622847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.622864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.633469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e73e0 00:35:16.255 [2024-05-16 09:48:09.634573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.634588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.646656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e73e0 00:35:16.255 [2024-05-16 09:48:09.648374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.648390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.656874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc128 00:35:16.255 [2024-05-16 09:48:09.657967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.657984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.668627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f57b0 00:35:16.255 [2024-05-16 09:48:09.669710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.669726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.680384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ec408 00:35:16.255 [2024-05-16 09:48:09.681473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.681489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.692108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0350 00:35:16.255 [2024-05-16 09:48:09.693195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.693214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.703851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f1430 00:35:16.255 [2024-05-16 09:48:09.704933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.704949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.715591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e3060 00:35:16.255 [2024-05-16 09:48:09.716681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.716697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.727311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eff18 00:35:16.255 [2024-05-16 09:48:09.728400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.728416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.739073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190de038 00:35:16.255 [2024-05-16 09:48:09.740127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 
09:48:09.740142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.752319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ed4e8 00:35:16.255 [2024-05-16 09:48:09.754036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.754056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.762939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1710 00:35:16.255 [2024-05-16 09:48:09.764141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.764157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.774412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e73e0 00:35:16.255 [2024-05-16 09:48:09.775647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.775663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.787193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fef90 00:35:16.255 [2024-05-16 09:48:09.788704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.788719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.796964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fb8b8 00:35:16.255 [2024-05-16 09:48:09.797836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.797852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:16.255 [2024-05-16 09:48:09.810237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6300 00:35:16.255 [2024-05-16 09:48:09.811741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.255 [2024-05-16 09:48:09.811756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.820848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f57b0 00:35:16.516 [2024-05-16 09:48:09.821871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8865 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:16.516 [2024-05-16 09:48:09.821887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.832158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e4578 00:35:16.516 [2024-05-16 09:48:09.833173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.833189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.845025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fa3a0 00:35:16.516 [2024-05-16 09:48:09.846257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.846273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.855993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e88f8 00:35:16.516 [2024-05-16 09:48:09.857172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.857187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.870205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e5220 00:35:16.516 [2024-05-16 09:48:09.872024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.872039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.879989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fda78 00:35:16.516 [2024-05-16 09:48:09.881291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.881306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.890585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190feb58 00:35:16.516 [2024-05-16 09:48:09.891448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.891464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.902462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f7538 00:35:16.516 [2024-05-16 09:48:09.903330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24481 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.903346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.914125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f9f68 00:35:16.516 [2024-05-16 09:48:09.914979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.914994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.925842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f9f68 00:35:16.516 [2024-05-16 09:48:09.926693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.926710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.937527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f9f68 00:35:16.516 [2024-05-16 09:48:09.938375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.938390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.950888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f20d8 00:35:16.516 [2024-05-16 09:48:09.952333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.952349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.960688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ecc78 00:35:16.516 [2024-05-16 09:48:09.961681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.961697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.975225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e88f8 00:35:16.516 [2024-05-16 09:48:09.977022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.977038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.986122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1f80 00:35:16.516 [2024-05-16 09:48:09.987579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:4116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:09.987594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:09.998990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f1ca0 00:35:16.516 [2024-05-16 09:48:10.000804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:10.000822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:10.009985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6738 00:35:16.516 [2024-05-16 09:48:10.011455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.516 [2024-05-16 09:48:10.011472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.516 [2024-05-16 09:48:10.021367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fbcf0 00:35:16.516 [2024-05-16 09:48:10.022273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.517 [2024-05-16 09:48:10.022289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.517 [2024-05-16 09:48:10.033450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e8088 00:35:16.517 [2024-05-16 09:48:10.034878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.517 [2024-05-16 09:48:10.034894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.517 [2024-05-16 09:48:10.046360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e23b8 00:35:16.517 [2024-05-16 09:48:10.048181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.517 [2024-05-16 09:48:10.048198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.517 [2024-05-16 09:48:10.055787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fe720 00:35:16.517 [2024-05-16 09:48:10.056945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.517 [2024-05-16 09:48:10.056961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:16.517 [2024-05-16 09:48:10.068575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f4b08 00:35:16.517 [2024-05-16 09:48:10.070026] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.517 [2024-05-16 09:48:10.070041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.081461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6b70 00:35:16.778 [2024-05-16 09:48:10.083250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.083266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.090889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190edd58 00:35:16.778 [2024-05-16 09:48:10.092046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.092064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.104878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc998 00:35:16.778 [2024-05-16 09:48:10.106591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.106607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.114304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e99d8 00:35:16.778 [2024-05-16 09:48:10.115466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.115481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.127104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e84c0 00:35:16.778 [2024-05-16 09:48:10.128554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.128570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.140614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e2c28 00:35:16.778 [2024-05-16 09:48:10.142703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.142719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.150860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e4de8 00:35:16.778 [2024-05-16 09:48:10.152352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.152367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.162630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0788 00:35:16.778 [2024-05-16 09:48:10.164109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.164125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.174482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ec840 00:35:16.778 [2024-05-16 09:48:10.175943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.175959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.186225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190dece0 00:35:16.778 [2024-05-16 09:48:10.187649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.187665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.778 [2024-05-16 09:48:10.198255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0ff8 00:35:16.778 [2024-05-16 09:48:10.199683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.778 [2024-05-16 09:48:10.199698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.209994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0788 00:35:16.779 [2024-05-16 09:48:10.211441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.211457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.220743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e2c28 00:35:16.779 [2024-05-16 09:48:10.221812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.221828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.233248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e95a0 00:35:16.779 [2024-05-16 
09:48:10.234850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.234865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.243820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f4f40 00:35:16.779 [2024-05-16 09:48:10.244947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.244963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.255676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0788 00:35:16.779 [2024-05-16 09:48:10.256793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.256809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.267377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0788 00:35:16.779 [2024-05-16 09:48:10.268505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.268522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.278288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f6020 00:35:16.779 [2024-05-16 09:48:10.279357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.279372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.292302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f3e60 00:35:16.779 [2024-05-16 09:48:10.294066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.294082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.304351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eb760 00:35:16.779 [2024-05-16 09:48:10.306069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.306087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.314424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc560 
00:35:16.779 [2024-05-16 09:48:10.315816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.315832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:16.779 [2024-05-16 09:48:10.325292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ec840 00:35:16.779 [2024-05-16 09:48:10.326340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.779 [2024-05-16 09:48:10.326356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.338820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc998 00:35:17.040 [2024-05-16 09:48:10.340503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.340518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.349426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fda78 00:35:17.040 [2024-05-16 09:48:10.350624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.350640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.360732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f7970 00:35:17.040 [2024-05-16 09:48:10.361887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.361902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.373596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fac10 00:35:17.040 [2024-05-16 09:48:10.374961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.374977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.385343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f3e60 00:35:17.040 [2024-05-16 09:48:10.386671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.386686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.397106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with 
pdu=0x2000190e27f0 00:35:17.040 [2024-05-16 09:48:10.398463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.398479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.407826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ee190 00:35:17.040 [2024-05-16 09:48:10.408800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.408816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.420338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190df988 00:35:17.040 [2024-05-16 09:48:10.421853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.421868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.429792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f4f40 00:35:17.040 [2024-05-16 09:48:10.430671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.430687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.040 [2024-05-16 09:48:10.442317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e9168 00:35:17.040 [2024-05-16 09:48:10.443204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.040 [2024-05-16 09:48:10.443220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.455752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190df988 00:35:17.041 [2024-05-16 09:48:10.457237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.457252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.466030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ee5c8 00:35:17.041 [2024-05-16 09:48:10.466915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.466931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.479369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f6220) with pdu=0x2000190ea248 00:35:17.041 [2024-05-16 09:48:10.480877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.480894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.489601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e3d08 00:35:17.041 [2024-05-16 09:48:10.490443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.490459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.500746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f5be8 00:35:17.041 [2024-05-16 09:48:10.501607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.501622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.515263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190df988 00:35:17.041 [2024-05-16 09:48:10.516937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.516953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.525641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e9168 00:35:17.041 [2024-05-16 09:48:10.526838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.526854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.536934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1b48 00:35:17.041 [2024-05-16 09:48:10.538106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.538122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.549200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f1868 00:35:17.041 [2024-05-16 09:48:10.550366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.550383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.562385] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1b48 00:35:17.041 [2024-05-16 09:48:10.564210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.564225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.572975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1710 00:35:17.041 [2024-05-16 09:48:10.574317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.574333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.584848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ebfd0 00:35:17.041 [2024-05-16 09:48:10.586186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.586202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.041 [2024-05-16 09:48:10.596603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190edd58 00:35:17.041 [2024-05-16 09:48:10.597957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.041 [2024-05-16 09:48:10.597972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.607780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f2948 00:35:17.303 [2024-05-16 09:48:10.609113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.609132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.617566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190dece0 00:35:17.303 [2024-05-16 09:48:10.618412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.618428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.631567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6b70 00:35:17.303 [2024-05-16 09:48:10.633063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.633079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.642445] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f1868 00:35:17.303 [2024-05-16 09:48:10.643583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.643598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.655316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e38d0 00:35:17.303 [2024-05-16 09:48:10.656810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.656826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.665404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ea248 00:35:17.303 [2024-05-16 09:48:10.666531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.666547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.678879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ebfd0 00:35:17.303 [2024-05-16 09:48:10.680268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.680283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.689779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f2d80 00:35:17.303 [2024-05-16 09:48:10.691184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.691199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.700370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f96f8 00:35:17.303 [2024-05-16 09:48:10.701257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.701273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.712281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f3a28 00:35:17.303 [2024-05-16 09:48:10.713199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.713215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 
09:48:10.725526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f1868 00:35:17.303 [2024-05-16 09:48:10.727085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.727102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.736126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f0350 00:35:17.303 [2024-05-16 09:48:10.737185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.737201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.748057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ec840 00:35:17.303 [2024-05-16 09:48:10.749115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.749131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.761313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eaab8 00:35:17.303 [2024-05-16 09:48:10.763011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.763026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.770752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190dfdc0 00:35:17.303 [2024-05-16 09:48:10.771824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.771839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.784794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fe720 00:35:17.303 [2024-05-16 09:48:10.786525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.786541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.795664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e8088 00:35:17.303 [2024-05-16 09:48:10.797038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.797058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:35:17.303 [2024-05-16 09:48:10.808547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e73e0 00:35:17.303 [2024-05-16 09:48:10.810258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.810273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.817995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fb048 00:35:17.303 [2024-05-16 09:48:10.819069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.819085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.830804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190feb58 00:35:17.303 [2024-05-16 09:48:10.832184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.832200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.843671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fa3a0 00:35:17.303 [2024-05-16 09:48:10.845399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.845416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.303 [2024-05-16 09:48:10.853536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f6020 00:35:17.303 [2024-05-16 09:48:10.854521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.303 [2024-05-16 09:48:10.854537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.866048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ea248 00:35:17.566 [2024-05-16 09:48:10.867573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.867589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.875496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6fa8 00:35:17.566 [2024-05-16 09:48:10.876362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.876377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 
sqhd:0007 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.889519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e2c28 00:35:17.566 [2024-05-16 09:48:10.891049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.891067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.899083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eee38 00:35:17.566 [2024-05-16 09:48:10.899962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.899979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.913048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190efae0 00:35:17.566 [2024-05-16 09:48:10.914592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.914610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.923670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f20d8 00:35:17.566 [2024-05-16 09:48:10.924737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.924753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.934909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f9f68 00:35:17.566 [2024-05-16 09:48:10.935806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.935822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.945832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f5be8 00:35:17.566 [2024-05-16 09:48:10.946703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.946718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.957478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190eb328 00:35:17.566 [2024-05-16 09:48:10.958307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.958322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.973627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fb8b8 00:35:17.566 [2024-05-16 09:48:10.975746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.975761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.984222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fa3a0 00:35:17.566 [2024-05-16 09:48:10.985740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.985756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:10.995293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190ef6a8 00:35:17.566 [2024-05-16 09:48:10.996784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:10.996799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:11.006924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e5220 00:35:17.566 [2024-05-16 09:48:11.008518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:11.008534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:11.017515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fc560 00:35:17.566 [2024-05-16 09:48:11.018601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:11.018617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:11.030930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f92c0 00:35:17.566 [2024-05-16 09:48:11.032693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:11.032708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:11.041122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e3498 00:35:17.566 [2024-05-16 09:48:11.042235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:11.042251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:11.052081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e73e0 00:35:17.566 [2024-05-16 09:48:11.053166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.566 [2024-05-16 09:48:11.053182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.566 [2024-05-16 09:48:11.066681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fa3a0 00:35:17.566 [2024-05-16 09:48:11.068553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.567 [2024-05-16 09:48:11.068569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.567 [2024-05-16 09:48:11.076842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e8d30 00:35:17.567 [2024-05-16 09:48:11.078077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.567 [2024-05-16 09:48:11.078092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.567 [2024-05-16 09:48:11.087792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f8a50 00:35:17.567 [2024-05-16 09:48:11.089046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.567 [2024-05-16 09:48:11.089065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.567 [2024-05-16 09:48:11.099580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190df988 00:35:17.567 [2024-05-16 09:48:11.100199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.567 [2024-05-16 09:48:11.100216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.567 [2024-05-16 09:48:11.110778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e1f80 00:35:17.567 [2024-05-16 09:48:11.111663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.567 [2024-05-16 09:48:11.111679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.567 [2024-05-16 09:48:11.124002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f1868 00:35:17.567 [2024-05-16 09:48:11.125448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.567 [2024-05-16 09:48:11.125464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.829 [2024-05-16 09:48:11.134275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e88f8 00:35:17.829 [2024-05-16 09:48:11.135028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.829 [2024-05-16 09:48:11.135043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.829 [2024-05-16 09:48:11.147468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6738 00:35:17.829 [2024-05-16 09:48:11.148945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.829 [2024-05-16 09:48:11.148961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.829 [2024-05-16 09:48:11.157671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f57b0 00:35:17.830 [2024-05-16 09:48:11.158542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.158558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.169419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190f5be8 00:35:17.830 [2024-05-16 09:48:11.170277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.170293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.180364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6fa8 00:35:17.830 [2024-05-16 09:48:11.181199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.181215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.194505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6738 00:35:17.830 [2024-05-16 09:48:11.195994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.196010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.204760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6738 00:35:17.830 [2024-05-16 09:48:11.205620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.205636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.216488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fb8b8 00:35:17.830 [2024-05-16 09:48:11.217294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.217313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.229719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190e6738 00:35:17.830 [2024-05-16 09:48:11.231192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.231208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.830 [2024-05-16 09:48:11.240319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f6220) with pdu=0x2000190fa3a0 00:35:17.830 [2024-05-16 09:48:11.241322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-05-16 09:48:11.241339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.830 00:35:17.830 Latency(us) 00:35:17.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.830 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:17.830 nvme0n1 : 2.00 21669.21 84.65 0.00 0.00 5899.06 2143.57 17148.59 00:35:17.830 =================================================================================================================== 00:35:17.830 Total : 21669.21 84.65 0.00 0.00 5899.06 2143.57 17148.59 00:35:17.830 0 00:35:17.830 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:17.830 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:17.830 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:17.830 | .driver_specific 00:35:17.830 | .nvme_error 00:35:17.830 | .status_code 00:35:17.830 | .command_transient_transport_error' 00:35:17.830 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 525097 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 525097 ']' 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 525097 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:18.090 09:48:11 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 525097 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 525097' 00:35:18.090 killing process with pid 525097 00:35:18.090 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 525097 00:35:18.090 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.090 00:35:18.090 Latency(us) 00:35:18.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.090 =================================================================================================================== 00:35:18.091 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 525097 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=526142 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 526142 /var/tmp/bperf.sock 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 526142 ']' 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:18.091 09:48:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.091 [2024-05-16 09:48:11.638894] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:18.091 [2024-05-16 09:48:11.638952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526142 ] 00:35:18.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:18.091 Zero copy mechanism will not be used. 
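Between the two passes above, digest.sh verifies the first run by reading the bdev's NVMe error statistics over the bperf RPC socket and asserting that the transient-transport-error counter is non-zero (the (( 170 > 0 )) check), then kills that bdevperf and launches a fresh instance with -o 131072 -q 16 on the same socket. A minimal standalone sketch of the verification step, reusing the rpc.py path and jq filter visible in the trace; the variable names and the bare arithmetic test at the end are illustrative, not the literal digest.sh code:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
# Pull per-bdev I/O statistics (including NVMe error status counters) from bdevperf.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The data-digest error test only passes if corrupted payloads were actually detected.
(( errcount > 0 ))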
00:35:18.351 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.351 [2024-05-16 09:48:11.713847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.351 [2024-05-16 09:48:11.767060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.922 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:18.922 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:18.922 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.922 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.182 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:19.182 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.182 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.182 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.182 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.182 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.442 nvme0n1 00:35:19.442 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:19.442 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.442 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.442 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.442 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:19.442 09:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:19.443 Zero copy mechanism will not be used. 00:35:19.443 Running I/O for 2 seconds... 
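With the new bdevperf listening on /var/tmp/bperf.sock, the trace above sets up the 128 KiB, queue-depth-16 error pass: NVMe error statistics are enabled with unlimited bdev retries, any previous CRC32C injection is cleared, the controller is attached with TCP data digest (--ddgst) checking enabled, CRC32C corruption is injected for 32 operations, and the 2-second workload is started through bdevperf.py. A condensed sketch of that RPC sequence under the same assumptions; the rpc.py and bdevperf.py paths are taken from the log, while the socket used for the accel_error_inject_error calls is assumed to be the target application's default (rpc_cmd in the harness does not go through bperf.sock):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

# Count every NVMe error status and retry failed I/O indefinitely on the initiator side.
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start from a clean state, then attach the subsystem with TCP data digest enabled.
"$RPC" accel_error_inject_error -o crc32c -t disable
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Make the accel layer return corrupted CRC32C results for the next 32 operations,
# so data digests mismatch and writes complete with transient transport errors.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the queued randwrite job defined on the bdevperf command line (-w randwrite -o 131072 -q 16 -t 2).
"$BPERF_PY" -s "$SOCK" perform_tests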
00:35:19.443 [2024-05-16 09:48:12.911545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.911916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.911943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.919919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.920260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.920280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.930357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.930702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.930721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.939828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.940165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.940183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.951894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.952198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.952217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.963543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.963926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.963943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.975788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.976155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.976173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.985671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.986007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.986024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.994526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.994738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.994754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.443 [2024-05-16 09:48:12.999332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.443 [2024-05-16 09:48:12.999539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.443 [2024-05-16 09:48:12.999556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.004551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.004752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.004768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.011841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.012041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.012064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.015957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.016161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.016177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.020236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.020445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.020461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.024348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.024548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.024564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.030591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.030940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.030958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.037209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.037489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.037506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.042347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.042547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.042563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.053600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.704 [2024-05-16 09:48:13.053944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.704 [2024-05-16 09:48:13.053961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.704 [2024-05-16 09:48:13.065484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.065839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.065856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.077318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.077688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.077705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.088713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.088959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.088976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.100311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.100561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.100577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.112023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.112365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.112382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.123857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.124173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.124194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.135168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.135396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.135412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.145722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.145963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.145979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.157787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.158094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 
[2024-05-16 09:48:13.158112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.163789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.163989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.164005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.168910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.169124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.169140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.177246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.177623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.177640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.185092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.185394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.185412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.189772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.189972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.189988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.193849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.194047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.194069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.197986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.198187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.198203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.201903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.202108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.202124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.205953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.206155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.206171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.209896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.210102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.210118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.213983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.214190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.214206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.217984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.218186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.218203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.222262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.222461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.222477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.226290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.226488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.226507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.230291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.230491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.230507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.234174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.234371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.234387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.238233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.238434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.238450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.242301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.242499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.242515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.246396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.246693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.246711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.252282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.252607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.252624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.705 [2024-05-16 09:48:13.257740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.705 [2024-05-16 09:48:13.258007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.705 [2024-05-16 09:48:13.258023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.266109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.266573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.266591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.274202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.274409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.274424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.282018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.282092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.282107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.290668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.290954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.290971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.297657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.297849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.297865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.303217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.303406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.303423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.310882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 
[2024-05-16 09:48:13.311074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.311089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.315716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.315903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.315919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.322491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.322683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.322698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.330223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.330529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.330546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.335239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.335428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.335444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.341925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.342116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.342133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.346506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.346693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.346709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.355125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.355320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.355336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.362953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.363214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.363231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.371241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.371540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.371557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.376723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.376910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.376926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.380801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.380988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.381004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.384837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.385057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.385075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.389897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.390090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.390106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.393976] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.394166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.394182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.400104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.400323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.400338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.404821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.404998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.405014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.412933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.413218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.413236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.418981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.419160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.419176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.423715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.423893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.423909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.429271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.429515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.429531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
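[editorial note] The repeated data_crc32_calc_done errors above are NVMe/TCP data digest (DDGST) failures: the receiver recomputes a CRC32C over each data PDU payload and, when it does not match the digest carried in the PDU, the affected WRITE completes with the transient transport error shown in the paired completion lines. The sketch below is illustrative only, not SPDK code: a bitwise CRC32C over a hypothetical 32-byte payload (matching the len:32 transfers in this log), whereas real transports normally use table-driven or hardware-accelerated CRC32C helpers.

/* Minimal sketch, assuming a plain buffer stands in for PDU data. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init and final XOR of 0xFFFFFFFF, as used for NVMe/TCP digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[32] = { 0 };  /* hypothetical 32-byte data PDU payload */

    printf("DDGST = 0x%08x\n", (unsigned int)crc32c(payload, sizeof(payload)));
    return 0;
}

A digest mismatch like the ones logged here indicates corruption on the wire (or, in a test like this one, deliberately injected corruption), so the command is failed at the transport level rather than reaching the namespace.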
00:35:19.966 [2024-05-16 09:48:13.435608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.435790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.966 [2024-05-16 09:48:13.435806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.966 [2024-05-16 09:48:13.442134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.966 [2024-05-16 09:48:13.442311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.442327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.448852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.449128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.449146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.455590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.455769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.455785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.460699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.460875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.460891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.464382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.464559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.464575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.468221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.468399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.468414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.472099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.472277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.472292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.475940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.476119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.476135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.479824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.480000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.483566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.483743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.483759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.487169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.487348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.487363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.490838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.491014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.491030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.494427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.494603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.494619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.498083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.498260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.498276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.501722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.501899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.501914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.505725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.505900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.505916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.511278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.511484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.511502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.515033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.515215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.515230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.967 [2024-05-16 09:48:13.518839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:19.967 [2024-05-16 09:48:13.519016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:19.967 [2024-05-16 09:48:13.519032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.525075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.525414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.525431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.531429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.531625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.531641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.541525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.541708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.541723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.550929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.551284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.551302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.560757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.560934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.560950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.570989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.571312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.571329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.581734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.582071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.582088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.592382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.592649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 
[2024-05-16 09:48:13.592667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.598797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.598975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.598991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.606415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.606594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.606610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.614876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.615055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.615072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.621719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.621895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.621912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.628354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.628533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.628549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.635842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.636005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.636020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.645938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.646278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.646299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.656782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.656996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.657013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.667225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.667623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.667640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.677264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.677544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.677560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.688306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.688559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.688577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.699107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.699496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.699512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.709263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.709456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.709472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.719904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.720131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.229 [2024-05-16 09:48:13.720147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.229 [2024-05-16 09:48:13.729980] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.229 [2024-05-16 09:48:13.730184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.230 [2024-05-16 09:48:13.730200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.230 [2024-05-16 09:48:13.741023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.230 [2024-05-16 09:48:13.741466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.230 [2024-05-16 09:48:13.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.230 [2024-05-16 09:48:13.751833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.230 [2024-05-16 09:48:13.752012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.230 [2024-05-16 09:48:13.752029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.230 [2024-05-16 09:48:13.760893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.230 [2024-05-16 09:48:13.761084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.230 [2024-05-16 09:48:13.761101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.230 [2024-05-16 09:48:13.771862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.230 [2024-05-16 09:48:13.772138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.230 [2024-05-16 09:48:13.772156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.230 [2024-05-16 09:48:13.782635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.230 [2024-05-16 09:48:13.782829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.230 [2024-05-16 09:48:13.782845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.792636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.792851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.792867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.801407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.801678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.801696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.811478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.811712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.811730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.821324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.821619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.821636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.829899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.830118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.830134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.839988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.840215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.840231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.848796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.848971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.848988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.853176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 
[2024-05-16 09:48:13.853352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.853368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.857033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.857226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.857242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.860992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.861179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.861195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.864886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.865067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.865083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.868820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.868998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.869013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.872921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.873102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.873121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.878911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.879094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.879109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.882700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.882877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.882893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.889898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.890071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.890088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.893707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.893877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.893893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.899801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.899972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.899988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.903313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.903483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.903500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.906943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.907119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.907135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.912582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.912753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.912769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.917022] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.917205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.917223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.920585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.920773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.920789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.924655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.924822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.924838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.930910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.491 [2024-05-16 09:48:13.931088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.491 [2024-05-16 09:48:13.931105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.491 [2024-05-16 09:48:13.934927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.935104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.935120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:13.941384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.941629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.941645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:13.949314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.949698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
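[editorial note] Each completion above prints "(00/22)": status code type 0x0 (generic command status) and status code 0x22, which the log itself names COMMAND TRANSIENT TRANSPORT ERROR, together with dnr:0 (do-not-retry clear). That combination marks the WRITE as retryable by the host. The sketch below is a hypothetical retry-decision helper, not SPDK's logic; the struct, field names, and retry budget are assumptions chosen only to mirror the values printed in these log lines.

#include <stdbool.h>
#include <stdio.h>

/* Decoded completion status, mirroring the "(sct/sc)" and dnr fields in the log. */
struct completion_status {
    unsigned sct;   /* status code type: 0x0 (generic) in this log        */
    unsigned sc;    /* status code: 0x22, Command Transient Transport Err */
    bool dnr;       /* do-not-retry bit: printed as dnr:0 above           */
};

static bool should_retry(const struct completion_status *st,
                         int attempts, int max_attempts)
{
    if (st->dnr || attempts >= max_attempts) {
        return false;   /* controller forbade retry, or retry budget spent */
    }
    /* Generic status 0x22 is a transient transport error: safe to resubmit. */
    return st->sct == 0x0 && st->sc == 0x22;
}

int main(void)
{
    struct completion_status st = { .sct = 0x0, .sc = 0x22, .dnr = false };

    printf("retry WRITE: %s\n", should_retry(&st, 0, 3) ? "yes" : "no");
    return 0;
}

Because dnr is clear in every completion here, a host driver would normally requeue these WRITEs; the test keeps injecting digest errors, which is why the same pattern repeats for many LBAs.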
00:35:20.492 [2024-05-16 09:48:13.959152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.959403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.959420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:13.969045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.969437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.969453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:13.979417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.979727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.979744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:13.989880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:13.990181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:13.990197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:13.999911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:14.000171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:14.000186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:14.010459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:14.010662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:14.010677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:14.020536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:14.020821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:14.020836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:14.030432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:14.030672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:14.030687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:14.040706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:14.040993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:14.041008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.492 [2024-05-16 09:48:14.046907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.492 [2024-05-16 09:48:14.046976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.492 [2024-05-16 09:48:14.046991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.753 [2024-05-16 09:48:14.050962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.051047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.051069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.055382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.055466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.055481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.059555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.059673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.059688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.063830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.063913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.070030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.070102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.070117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.074044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.074158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.074173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.082841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.082997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.083012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.093121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.093429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.093445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.103179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.103369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.103384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.113732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.114025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.114041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.123802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.123999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.124013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.129031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.129091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.129106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.132401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.132478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.132493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.135901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.135967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.135982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.139221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.139283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.139298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.145195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.145259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.145274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.148462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.148514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.148529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.151756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.151809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.151824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.155076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.155130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.155145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.158744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.158800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.158815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.162080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.162150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.162165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.168653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.168886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.168901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.172972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.173097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.173119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.179840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.179922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 09:48:14.179937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.184618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.754 [2024-05-16 09:48:14.184679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.754 [2024-05-16 
09:48:14.184694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.754 [2024-05-16 09:48:14.188136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.188218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.188234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.193817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.193872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.193889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.197747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.197798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.197813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.201240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.201313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.201328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.204698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.204748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.204764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.209923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.209989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.210004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.213464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.213535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:20.755 [2024-05-16 09:48:14.213550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.216948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.217000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.217015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.220574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.220665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.220680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.228499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.228692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.228707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.235915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.236166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.236181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.243763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.243818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.243833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.247408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.247468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.247484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.251073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.251126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.251141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.255894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.255947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.255962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.260383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.260484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.260499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.267252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.267421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.267436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.277594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.277807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.277823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.286043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.286314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.286331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.295798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.296010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.296025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.755 [2024-05-16 09:48:14.305612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:20.755 [2024-05-16 09:48:14.305842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.755 [2024-05-16 09:48:14.305858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.019 [2024-05-16 09:48:14.314977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.019 [2024-05-16 09:48:14.315047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.019 [2024-05-16 09:48:14.315067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.019 [2024-05-16 09:48:14.323845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.019 [2024-05-16 09:48:14.324152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.019 [2024-05-16 09:48:14.324168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.019 [2024-05-16 09:48:14.332823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.019 [2024-05-16 09:48:14.333035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.019 [2024-05-16 09:48:14.333050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.339005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.339228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.339243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.345423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.345567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.345583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.351215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.351271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.351285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.355460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.355520] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.355537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.361818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.361874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.361889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.370064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.370132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.370147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.374845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.374898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.374914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.379347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.379431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.379447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.383969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.384029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.384045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.388830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.388894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.388909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.392558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.392626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.392640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.400923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.401194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.401208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.406121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.406236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.406251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.409805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.409925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.409940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.413737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.413815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.413830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.417347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.417435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.417450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.421100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.421178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.421193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.424716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 
09:48:14.424798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.424813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.428371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.428508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.428524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.432030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.432133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.432148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.435577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.435629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.435644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.442382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.442461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.442476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.446078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.446218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.020 [2024-05-16 09:48:14.446233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.020 [2024-05-16 09:48:14.449777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.020 [2024-05-16 09:48:14.449828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.449844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.453685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with 
pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.453758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.453774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.457558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.457620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.457635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.461707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.462012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.462029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.465654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.465706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.465721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.468923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.468975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.468990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.472193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.472248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.472266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.475590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.475642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.475657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.478889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.478948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.478963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.482148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.482208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.482223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.485487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.485576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.485592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.489209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.489292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.489306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.492908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.493008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.493023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.498891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.499144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.499159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.506085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.506151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.506167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.509521] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.509578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.509594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.513125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.513192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.513207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.516704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.516754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.516769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.520309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.520360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.520375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.523908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.523977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.523992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.529173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.529233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.529248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.535815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.535886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.535901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.540264] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.540327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.540342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.544112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.544203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.544219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.548372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.548443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.548458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.551926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.021 [2024-05-16 09:48:14.551986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.021 [2024-05-16 09:48:14.552001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.021 [2024-05-16 09:48:14.555679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.022 [2024-05-16 09:48:14.555750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.022 [2024-05-16 09:48:14.555766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.022 [2024-05-16 09:48:14.560592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.022 [2024-05-16 09:48:14.560844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.022 [2024-05-16 09:48:14.560861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.022 [2024-05-16 09:48:14.565216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.022 [2024-05-16 09:48:14.565333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.022 [2024-05-16 09:48:14.565348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.022 
[2024-05-16 09:48:14.572430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.022 [2024-05-16 09:48:14.572492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.022 [2024-05-16 09:48:14.572508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.022 [2024-05-16 09:48:14.575716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.022 [2024-05-16 09:48:14.575779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.022 [2024-05-16 09:48:14.575794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.284 [2024-05-16 09:48:14.579425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.284 [2024-05-16 09:48:14.579514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.284 [2024-05-16 09:48:14.579529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.284 [2024-05-16 09:48:14.585352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.585565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.585585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.593552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.593674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.593689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.597317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.597424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.597439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.601039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.601133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.601148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.604695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.604813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.604829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.608504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.608595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.608610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.612288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.612411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.612426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.615617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.615726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.615742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.619300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.619351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.619367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.622889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.622958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.622973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.626544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.626595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.626610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.630333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.630397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.630413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.633757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.633820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.633835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.637007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.637073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.637088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.640264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.640336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.640350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.643510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.643566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.643581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.646760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.646814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.646829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.650041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.650107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.650122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.653287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.653338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.653352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.656529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.656590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.656606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.659770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.659834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.659849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.663018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.663080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.663095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.666249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.666310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.666325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.669472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.669523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.669538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.672694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.672755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.285 [2024-05-16 09:48:14.672770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.285 [2024-05-16 09:48:14.675930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.285 [2024-05-16 09:48:14.675990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.676005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.679165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.679221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.679238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.682360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.682420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.682434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.685562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.685619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.685635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.688781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.688845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.688860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.692605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.692657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.692672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.699160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.699589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 
09:48:14.699605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.704004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.704074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.704089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.707671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.707733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.707748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.712531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.712780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.712797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.717172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.717239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.717254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.720569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.720640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.720655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.724490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.724586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.724601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.731810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.731942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:21.286 [2024-05-16 09:48:14.731957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.739829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.739895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.739911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.744023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.744123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.744139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.748375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.748429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.748444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.752510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.752564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.752579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.759410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.759660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.759677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.764933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.765018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.765033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.768674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.768728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.768743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.774970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.775026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.775041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.784020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.784096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.784111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.792088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.792171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.792186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.799008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.799428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.799445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.286 [2024-05-16 09:48:14.805926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.286 [2024-05-16 09:48:14.806020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.286 [2024-05-16 09:48:14.806035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.810439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.810492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.810507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.814728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.814780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.814797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.820909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.820964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.820979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.827265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.827358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.827373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.831618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.831670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.831685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.835778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.835830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.835845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.287 [2024-05-16 09:48:14.841973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.287 [2024-05-16 09:48:14.842043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.287 [2024-05-16 09:48:14.842064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.548 [2024-05-16 09:48:14.847885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.548 [2024-05-16 09:48:14.847945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-05-16 09:48:14.847960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.548 [2024-05-16 09:48:14.852382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.548 [2024-05-16 09:48:14.852438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-05-16 09:48:14.852453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.548 [2024-05-16 09:48:14.856711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.548 [2024-05-16 09:48:14.856764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-05-16 09:48:14.856781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.548 [2024-05-16 09:48:14.862428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.548 [2024-05-16 09:48:14.862526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-05-16 09:48:14.862542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.548 [2024-05-16 09:48:14.870060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.548 [2024-05-16 09:48:14.870302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.548 [2024-05-16 09:48:14.870318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.549 [2024-05-16 09:48:14.876037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.549 [2024-05-16 09:48:14.876130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.549 [2024-05-16 09:48:14.876146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.549 [2024-05-16 09:48:14.880551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.549 [2024-05-16 09:48:14.880723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.549 [2024-05-16 09:48:14.880738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.549 [2024-05-16 09:48:14.889256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.549 [2024-05-16 09:48:14.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.549 [2024-05-16 09:48:14.889541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.549 [2024-05-16 09:48:14.894434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.549 [2024-05-16 09:48:14.894484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.549 [2024-05-16 09:48:14.894499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.549 [2024-05-16 09:48:14.899387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21f65d0) with pdu=0x2000190fef90 00:35:21.549 [2024-05-16 09:48:14.899441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.549 [2024-05-16 09:48:14.899455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.549 00:35:21.549 Latency(us) 00:35:21.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.549 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:21.549 nvme0n1 : 2.00 5180.61 647.58 0.00 0.00 3084.16 1385.81 11960.32 00:35:21.549 =================================================================================================================== 00:35:21.549 Total : 5180.61 647.58 0.00 0.00 3084.16 1385.81 11960.32 00:35:21.549 0 00:35:21.549 09:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:21.549 09:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:21.549 09:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:21.549 | .driver_specific 00:35:21.549 | .nvme_error 00:35:21.549 | .status_code 00:35:21.549 | .command_transient_transport_error' 00:35:21.549 09:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 334 > 0 )) 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 526142 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 526142 ']' 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 526142 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:21.549 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 526142 00:35:21.810 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:21.810 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 526142' 00:35:21.811 killing process with pid 526142 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 526142 00:35:21.811 Received shutdown signal, test time was about 2.000000 seconds 00:35:21.811 00:35:21.811 Latency(us) 00:35:21.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:35:21.811 =================================================================================================================== 00:35:21.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 526142 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 523173 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 523173 ']' 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 523173 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 523173 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 523173' 00:35:21.811 killing process with pid 523173 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 523173 00:35:21.811 [2024-05-16 09:48:15.308062] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:21.811 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 523173 00:35:22.072 00:35:22.072 real 0m16.048s 00:35:22.072 user 0m31.608s 00:35:22.072 sys 0m3.350s 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.072 ************************************ 00:35:22.072 END TEST nvmf_digest_error 00:35:22.072 ************************************ 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:22.072 rmmod nvme_tcp 00:35:22.072 rmmod nvme_fabrics 00:35:22.072 rmmod nvme_keyring 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 523173 ']' 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 523173 00:35:22.072 
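The pass check traced above, (( 334 > 0 )), comes from bdevperf's per-bdev NVMe error counters. As a minimal standalone sketch of that get_transient_errcount step, assuming the same bperf RPC socket and bdev name this run used:

  # Sketch of host/digest.sh's get_transient_errcount as traced above; the
  # rpc.py path, socket and bdev name are the ones printed by this run.
  errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
             -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
         jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error')
  # The test passes only if the injected data-digest errors were surfaced as
  # transient transport errors (334 of them in this run).
  (( errs > 0 ))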
09:48:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 523173 ']' 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 523173 00:35:22.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (523173) - No such process 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 523173 is not found' 00:35:22.072 Process with pid 523173 is not found 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:22.072 09:48:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.615 09:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:24.615 00:35:24.615 real 0m41.817s 00:35:24.615 user 1m5.488s 00:35:24.615 sys 0m12.149s 00:35:24.615 09:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:24.615 09:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.615 ************************************ 00:35:24.615 END TEST nvmf_digest 00:35:24.615 ************************************ 00:35:24.615 09:48:17 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:35:24.615 09:48:17 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:35:24.615 09:48:17 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:35:24.615 09:48:17 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:24.615 09:48:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:24.615 09:48:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:24.615 09:48:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:24.615 ************************************ 00:35:24.615 START TEST nvmf_bdevperf 00:35:24.615 ************************************ 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:24.615 * Looking for test storage... 
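The teardown traced here (nvmftestfini) mirrors the setup: unload the kernel NVMe/TCP stack, remove the target's network namespace, then flush the leftover initiator address on the next trace line. Condensed, with the module, namespace and interface names taken from this run:

  # Condensed view of the nvmftestfini sequence in the trace (a sketch, not the
  # helper itself, which lives in test/nvmf/common.sh).
  sync
  modprobe -v -r nvme-tcp       # the trace shows this dropping nvme_tcp, nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  # _remove_spdk_ns then tears down the cvl_0_0_ns_spdk namespace used by the
  # target, after which the initiator-side address is flushed:
  ip -4 addr flush cvl_0_1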
00:35:24.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:24.615 09:48:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:31.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:31.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:31.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:31.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:31.204 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:31.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:31.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:35:31.466 00:35:31.466 --- 10.0.0.2 ping statistics --- 00:35:31.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.466 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:35:31.466 09:48:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:31.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:31.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:35:31.466 00:35:31.466 --- 10.0.0.1 ping statistics --- 00:35:31.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.466 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:31.466 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=530856 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 530856 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 530856 ']' 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:31.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:31.727 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:31.727 [2024-05-16 09:48:25.099228] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:31.727 [2024-05-16 09:48:25.099288] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:31.727 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.727 [2024-05-16 09:48:25.188455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:31.727 [2024-05-16 09:48:25.283404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:31.728 [2024-05-16 09:48:25.283465] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:31.728 [2024-05-16 09:48:25.283474] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:31.728 [2024-05-16 09:48:25.283482] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:31.728 [2024-05-16 09:48:25.283489] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:31.728 [2024-05-16 09:48:25.283632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:31.728 [2024-05-16 09:48:25.283797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.728 [2024-05-16 09:48:25.283798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.671 [2024-05-16 09:48:25.925727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.671 Malloc0 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:32.671 [2024-05-16 09:48:25.990201] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:32.671 [2024-05-16 09:48:25.990415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.671 { 00:35:32.671 "params": { 00:35:32.671 "name": "Nvme$subsystem", 00:35:32.671 "trtype": "$TEST_TRANSPORT", 00:35:32.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.671 "adrfam": "ipv4", 00:35:32.671 "trsvcid": "$NVMF_PORT", 00:35:32.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.671 "hdgst": ${hdgst:-false}, 00:35:32.671 "ddgst": ${ddgst:-false} 00:35:32.671 }, 00:35:32.671 "method": "bdev_nvme_attach_controller" 00:35:32.671 } 00:35:32.671 EOF 00:35:32.671 )") 00:35:32.671 09:48:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:32.671 09:48:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:32.671 09:48:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:32.671 09:48:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:32.671 "params": { 00:35:32.671 "name": "Nvme1", 00:35:32.671 "trtype": "tcp", 00:35:32.671 "traddr": "10.0.0.2", 00:35:32.671 "adrfam": "ipv4", 00:35:32.671 "trsvcid": "4420", 00:35:32.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:32.671 "hdgst": false, 00:35:32.671 "ddgst": false 00:35:32.671 }, 00:35:32.671 "method": "bdev_nvme_attach_controller" 00:35:32.671 }' 00:35:32.671 [2024-05-16 09:48:26.044888] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:32.671 [2024-05-16 09:48:26.044935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531184 ] 00:35:32.671 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.671 [2024-05-16 09:48:26.102783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.671 [2024-05-16 09:48:26.167119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.932 Running I/O for 1 seconds... 
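Before the initiator side starts, the target in the trace above is assembled with a handful of RPCs: the TCP transport, a 64 MB malloc bdev with 512-byte blocks, the cnode1 subsystem with that bdev as its namespace, and a TCP listener on 10.0.0.2:4420. A standalone sketch of the same sequence, using rpc.py directly instead of the suite's rpc_cmd wrapper (paths and parameter values copied from this run):

  # Equivalent of the rpc_cmd calls traced above, issued against the target's
  # default RPC socket.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420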
00:35:34.316 00:35:34.316 Latency(us) 00:35:34.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:34.316 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:34.316 Verification LBA range: start 0x0 length 0x4000 00:35:34.316 Nvme1n1 : 1.01 9089.51 35.51 0.00 0.00 14020.47 2375.68 15073.28 00:35:34.316 =================================================================================================================== 00:35:34.316 Total : 9089.51 35.51 0.00 0.00 14020.47 2375.68 15073.28 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=531526 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.316 { 00:35:34.316 "params": { 00:35:34.316 "name": "Nvme$subsystem", 00:35:34.316 "trtype": "$TEST_TRANSPORT", 00:35:34.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.316 "adrfam": "ipv4", 00:35:34.316 "trsvcid": "$NVMF_PORT", 00:35:34.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.316 "hdgst": ${hdgst:-false}, 00:35:34.316 "ddgst": ${ddgst:-false} 00:35:34.316 }, 00:35:34.316 "method": "bdev_nvme_attach_controller" 00:35:34.316 } 00:35:34.316 EOF 00:35:34.316 )") 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:34.316 09:48:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:34.316 "params": { 00:35:34.316 "name": "Nvme1", 00:35:34.316 "trtype": "tcp", 00:35:34.316 "traddr": "10.0.0.2", 00:35:34.316 "adrfam": "ipv4", 00:35:34.316 "trsvcid": "4420", 00:35:34.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.316 "hdgst": false, 00:35:34.316 "ddgst": false 00:35:34.316 }, 00:35:34.316 "method": "bdev_nvme_attach_controller" 00:35:34.316 }' 00:35:34.316 [2024-05-16 09:48:27.625446] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:34.316 [2024-05-16 09:48:27.625501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531526 ] 00:35:34.316 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.316 [2024-05-16 09:48:27.683720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.316 [2024-05-16 09:48:27.747240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.576 Running I/O for 15 seconds... 
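
On the host side, gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (the expanded object is visible in the printf above) and bdevperf reads it over a process-substitution fd. A standalone equivalent with the config written to a regular file would look roughly like the sketch below; the file path is ours, and the surrounding "subsystems"/"bdev" wrapper is not shown in the trace, so it is assumed to follow the usual SPDK JSON-config shape rather than quoted from the test:

# Sketch: run bdevperf against the target created above with the same flags as
# the trace (queue depth 128, 4096-byte verify I/O, 15 seconds, -f kept as-is),
# feeding the JSON from a temp file instead of /dev/fd/63.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 15 -f
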
00:35:37.121 09:48:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 530856 00:35:37.121 09:48:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:37.121 [2024-05-16 09:48:30.599791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.599981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.599991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.600000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.600009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.600021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.600029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.600041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.600049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.121 [2024-05-16 09:48:30.600067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.121 [2024-05-16 09:48:30.600075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600456] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:37.122 [2024-05-16 09:48:30.600876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.122 [2024-05-16 09:48:30.600927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.122 [2024-05-16 09:48:30.600937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.600951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.600961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.600972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.600985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.600994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 
09:48:30.601100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.123 [2024-05-16 09:48:30.601595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.123 [2024-05-16 09:48:30.601645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.123 [2024-05-16 09:48:30.601654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.124 [2024-05-16 09:48:30.601662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.124 [2024-05-16 09:48:30.601678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.124 [2024-05-16 09:48:30.601696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:37.124 [2024-05-16 09:48:30.601949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.601991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.601999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:37.124 [2024-05-16 09:48:30.602105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 
09:48:30.602123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:37.124 [2024-05-16 09:48:30.602225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396d30 is same with the state(5) to be set 00:35:37.124 [2024-05-16 09:48:30.602242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:37.124 [2024-05-16 09:48:30.602247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:37.124 [2024-05-16 09:48:30.602254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107104 len:8 PRP1 0x0 PRP2 0x0 00:35:37.124 [2024-05-16 09:48:30.602262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:37.124 [2024-05-16 09:48:30.602299] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2396d30 was disconnected and freed. reset controller. 
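
The wall of ABORTED - SQ DELETION completions above is the expected result of the fault-injection step at the top of this block: host/bdevperf.sh kills the NVMe-oF target process (kill -9 530856) while bdevperf still has a full queue of 4 KiB I/Os outstanding, so every in-flight command on qpair 1 is completed manually as aborted and the qpair is disconnected and freed before the reset path runs. The pattern, with a placeholder variable standing in for the PID the script captured when it started the target, is simply:

# Fault injection as performed by the test: kill the target mid-I/O, then give
# the initiator a moment before the next step. $nvmfpid is our stand-in for the
# PID recorded at target start; the trace shows 530856.
kill -9 "$nvmfpid"
sleep 3
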
00:35:37.124 [2024-05-16 09:48:30.605898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.124 [2024-05-16 09:48:30.605946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.124 [2024-05-16 09:48:30.606619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.124 [2024-05-16 09:48:30.606916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.124 [2024-05-16 09:48:30.606927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.124 [2024-05-16 09:48:30.606935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.124 [2024-05-16 09:48:30.607163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.124 [2024-05-16 09:48:30.607388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.124 [2024-05-16 09:48:30.607397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.124 [2024-05-16 09:48:30.607405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.125 [2024-05-16 09:48:30.610956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.125 [2024-05-16 09:48:30.619963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.125 [2024-05-16 09:48:30.620615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.620990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.621004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.125 [2024-05-16 09:48:30.621014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.125 [2024-05-16 09:48:30.621262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.125 [2024-05-16 09:48:30.621486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.125 [2024-05-16 09:48:30.621495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.125 [2024-05-16 09:48:30.621503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.125 [2024-05-16 09:48:30.625062] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.125 [2024-05-16 09:48:30.633875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.125 [2024-05-16 09:48:30.634511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.634758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.634771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.125 [2024-05-16 09:48:30.634781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.125 [2024-05-16 09:48:30.635021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.125 [2024-05-16 09:48:30.635253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.125 [2024-05-16 09:48:30.635263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.125 [2024-05-16 09:48:30.635270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.125 [2024-05-16 09:48:30.638822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.125 [2024-05-16 09:48:30.647822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.125 [2024-05-16 09:48:30.648386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.648744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.648755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.125 [2024-05-16 09:48:30.648762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.125 [2024-05-16 09:48:30.648982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.125 [2024-05-16 09:48:30.649208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.125 [2024-05-16 09:48:30.649222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.125 [2024-05-16 09:48:30.649229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.125 [2024-05-16 09:48:30.652773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.125 [2024-05-16 09:48:30.661766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.125 [2024-05-16 09:48:30.662309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.662619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.662630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.125 [2024-05-16 09:48:30.662638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.125 [2024-05-16 09:48:30.662857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.125 [2024-05-16 09:48:30.663080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.125 [2024-05-16 09:48:30.663090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.125 [2024-05-16 09:48:30.663097] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.125 [2024-05-16 09:48:30.666642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.125 [2024-05-16 09:48:30.675643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.125 [2024-05-16 09:48:30.676191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.676389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.125 [2024-05-16 09:48:30.676400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.125 [2024-05-16 09:48:30.676407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.125 [2024-05-16 09:48:30.676626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.125 [2024-05-16 09:48:30.676846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.125 [2024-05-16 09:48:30.676854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.125 [2024-05-16 09:48:30.676861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.680412] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.387 [2024-05-16 09:48:30.689631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.690186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.690550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.690564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.690574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.690813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.691036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.691045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.691064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.694617] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.387 [2024-05-16 09:48:30.703610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.704248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.704612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.704626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.704635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.704874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.705105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.705115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.705123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.708671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.387 [2024-05-16 09:48:30.717452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.718140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.718498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.718512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.718522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.718761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.718984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.718993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.719000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.722555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.387 [2024-05-16 09:48:30.731359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.732007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.732344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.732358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.732368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.732607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.732830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.732840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.732848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.736404] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.387 [2024-05-16 09:48:30.745193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.745630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.745993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.746003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.746011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.746235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.746455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.746465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.746472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.750013] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.387 [2024-05-16 09:48:30.759000] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.759658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.759986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.759999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.760009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.760255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.760478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.760488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.760497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.764043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.387 [2024-05-16 09:48:30.772828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.773382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.773699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.773711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.773719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.773939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.774165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.774175] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.774182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.777722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.387 [2024-05-16 09:48:30.786754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.387 [2024-05-16 09:48:30.787296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.787599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.387 [2024-05-16 09:48:30.787610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.387 [2024-05-16 09:48:30.787619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.387 [2024-05-16 09:48:30.787839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.387 [2024-05-16 09:48:30.788062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.387 [2024-05-16 09:48:30.788072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.387 [2024-05-16 09:48:30.788079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.387 [2024-05-16 09:48:30.791621] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.387 [2024-05-16 09:48:30.800614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.801155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.801461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.801472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.801480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.801699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.801918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.801928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.801935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.805480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.388 [2024-05-16 09:48:30.814472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.815084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.815424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.815437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.815447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.815686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.815908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.815918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.815926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.819476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.388 [2024-05-16 09:48:30.828470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.829019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.829338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.829352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.829360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.829580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.829799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.829808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.829815] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.833379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.388 [2024-05-16 09:48:30.842374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.843039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.843383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.843397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.843406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.843645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.843868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.843877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.843885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.847437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.388 [2024-05-16 09:48:30.856232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.856746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.857066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.857078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.857086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.857305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.857525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.857534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.857541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.861130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.388 [2024-05-16 09:48:30.870137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.870673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.871003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.871020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.871030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.871277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.871501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.871510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.871518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.875074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.388 [2024-05-16 09:48:30.884089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.884665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.884977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.884988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.884996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.885222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.885442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.885451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.885458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.889007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.388 [2024-05-16 09:48:30.898013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.898575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.898908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.898919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.898927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.899151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.899371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.899381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.899388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.902973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.388 [2024-05-16 09:48:30.911979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.912407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.912697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.912708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.912722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.912942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.913168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.913178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.913185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.916734] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.388 [2024-05-16 09:48:30.925945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.926465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.926805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.926817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.926824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.927043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.927270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.927279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.388 [2024-05-16 09:48:30.927287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.388 [2024-05-16 09:48:30.930849] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.388 [2024-05-16 09:48:30.939855] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.388 [2024-05-16 09:48:30.940419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.940753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.388 [2024-05-16 09:48:30.940764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.388 [2024-05-16 09:48:30.940772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.388 [2024-05-16 09:48:30.940990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.388 [2024-05-16 09:48:30.941215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.388 [2024-05-16 09:48:30.941224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.389 [2024-05-16 09:48:30.941231] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.389 [2024-05-16 09:48:30.944777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.651 [2024-05-16 09:48:30.953782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:30.954307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.954617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.954628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:30.954635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:30.954859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:30.955083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:30.955093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:30.955100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:30.958649] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.651 [2024-05-16 09:48:30.967648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:30.968187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.968561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.968575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:30.968585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:30.968824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:30.969046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:30.969065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:30.969073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:30.972619] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.651 [2024-05-16 09:48:30.981627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:30.982246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.982620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.982633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:30.982643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:30.982881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:30.983113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:30.983124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:30.983131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:30.986681] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.651 [2024-05-16 09:48:30.995493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:30.996156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.996532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:30.996546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:30.996556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:30.996794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:30.997023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:30.997033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:30.997040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:31.000596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.651 [2024-05-16 09:48:31.009377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:31.010034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:31.010379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:31.010393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:31.010403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:31.010641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:31.010864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:31.010874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:31.010882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:31.014434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.651 [2024-05-16 09:48:31.023214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:31.023890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:31.024226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:31.024242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:31.024251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:31.024490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:31.024713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:31.024722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:31.024730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:31.028280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.651 [2024-05-16 09:48:31.037078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:31.037741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:31.038078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.651 [2024-05-16 09:48:31.038093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.651 [2024-05-16 09:48:31.038103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.651 [2024-05-16 09:48:31.038342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.651 [2024-05-16 09:48:31.038564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.651 [2024-05-16 09:48:31.038578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.651 [2024-05-16 09:48:31.038585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.651 [2024-05-16 09:48:31.042139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.651 [2024-05-16 09:48:31.050927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.651 [2024-05-16 09:48:31.051539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.051872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.051887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.051896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.052144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.052369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.052377] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.052385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.055936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.652 [2024-05-16 09:48:31.064716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.065408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.065779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.065793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.065802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.066041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.066274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.066284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.066292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.069840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.652 [2024-05-16 09:48:31.078617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.079180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.079552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.079566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.079575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.079814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.080037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.080046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.080067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.083620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.652 [2024-05-16 09:48:31.092612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.093239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.093610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.093624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.093634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.093872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.094105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.094114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.094122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.097666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.652 [2024-05-16 09:48:31.106448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.107095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.107442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.107456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.107466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.107704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.107928] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.107938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.107945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.111510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.652 [2024-05-16 09:48:31.120298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.120850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.121190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.121207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.121216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.121455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.121678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.121687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.121694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.125251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.652 [2024-05-16 09:48:31.134275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.134842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.135163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.135175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.135183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.135403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.135623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.135632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.135639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.139185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.652 [2024-05-16 09:48:31.148175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.652 [2024-05-16 09:48:31.148783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.149003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.652 [2024-05-16 09:48:31.149019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.652 [2024-05-16 09:48:31.149028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.652 [2024-05-16 09:48:31.149275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.652 [2024-05-16 09:48:31.149500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.652 [2024-05-16 09:48:31.149509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.652 [2024-05-16 09:48:31.149516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.652 [2024-05-16 09:48:31.153072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.652 [2024-05-16 09:48:31.162161] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.653 [2024-05-16 09:48:31.162807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.163173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.163189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.653 [2024-05-16 09:48:31.163199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.653 [2024-05-16 09:48:31.163438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.653 [2024-05-16 09:48:31.163661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.653 [2024-05-16 09:48:31.163671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.653 [2024-05-16 09:48:31.163679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.653 [2024-05-16 09:48:31.167229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.653 [2024-05-16 09:48:31.176004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.653 [2024-05-16 09:48:31.176657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.177029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.177043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.653 [2024-05-16 09:48:31.177060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.653 [2024-05-16 09:48:31.177299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.653 [2024-05-16 09:48:31.177522] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.653 [2024-05-16 09:48:31.177531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.653 [2024-05-16 09:48:31.177538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.653 [2024-05-16 09:48:31.181086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.653 [2024-05-16 09:48:31.189869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.653 [2024-05-16 09:48:31.190540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.190911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.190925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.653 [2024-05-16 09:48:31.190934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.653 [2024-05-16 09:48:31.191182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.653 [2024-05-16 09:48:31.191406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.653 [2024-05-16 09:48:31.191415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.653 [2024-05-16 09:48:31.191423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.653 [2024-05-16 09:48:31.194969] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.653 [2024-05-16 09:48:31.203781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.653 [2024-05-16 09:48:31.204429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.204769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.653 [2024-05-16 09:48:31.204784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.653 [2024-05-16 09:48:31.204793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.653 [2024-05-16 09:48:31.205032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.653 [2024-05-16 09:48:31.205261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.653 [2024-05-16 09:48:31.205272] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.653 [2024-05-16 09:48:31.205279] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.653 [2024-05-16 09:48:31.208826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.915 [2024-05-16 09:48:31.217615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.218177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.218548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.218562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.218571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.218810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.219033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.219044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.219058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.222605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.915 [2024-05-16 09:48:31.231612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.232346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.232743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.232757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.232766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.233005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.233237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.233248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.233255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.236802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.915 [2024-05-16 09:48:31.245578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.246158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.246528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.246542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.246551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.246790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.247013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.247022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.247030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.250586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.915 [2024-05-16 09:48:31.259366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.260022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.260374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.260393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.260402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.260641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.260864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.260874] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.260881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.264434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.915 [2024-05-16 09:48:31.273222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.273859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.274200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.274216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.274225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.274464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.274687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.274697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.274704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.278258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.915 [2024-05-16 09:48:31.287048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.287688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.288063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.288078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.288088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.288326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.288549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.288558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.288566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.292114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.915 [2024-05-16 09:48:31.300892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.301513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.301850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.301864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.301878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.302125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.302350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.302358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.915 [2024-05-16 09:48:31.302366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.915 [2024-05-16 09:48:31.305914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.915 [2024-05-16 09:48:31.314692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.915 [2024-05-16 09:48:31.315375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.315740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.915 [2024-05-16 09:48:31.315753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.915 [2024-05-16 09:48:31.315763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.915 [2024-05-16 09:48:31.316001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.915 [2024-05-16 09:48:31.316233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.915 [2024-05-16 09:48:31.316243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.316251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.319805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.916 [2024-05-16 09:48:31.328582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.329167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.329511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.329524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.329534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.329773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.329996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.330005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.330013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.333584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.916 [2024-05-16 09:48:31.342378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.342990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.343295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.343311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.343320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.343563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.343786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.343795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.343803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.347355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.916 [2024-05-16 09:48:31.356341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.356903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.357240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.357256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.357266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.357505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.357729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.357739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.357746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.361301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.916 [2024-05-16 09:48:31.370290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.370821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.370997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.371009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.371017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.371243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.371465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.371474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.371481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.375022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.916 [2024-05-16 09:48:31.384224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.384743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.385077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.385088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.385096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.385315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.385539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.385548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.385555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.389098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.916 [2024-05-16 09:48:31.398080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.398718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.399060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.399075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.399084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.399323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.399546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.399555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.399563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.403109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.916 [2024-05-16 09:48:31.411921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.412448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.412756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.412767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.412776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.412995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.413220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.413230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.413238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.416816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.916 [2024-05-16 09:48:31.425805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.426461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.426797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.426811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.916 [2024-05-16 09:48:31.426821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.916 [2024-05-16 09:48:31.427068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.916 [2024-05-16 09:48:31.427292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.916 [2024-05-16 09:48:31.427305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.916 [2024-05-16 09:48:31.427313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.916 [2024-05-16 09:48:31.430871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.916 [2024-05-16 09:48:31.439836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.916 [2024-05-16 09:48:31.440590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.440884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.916 [2024-05-16 09:48:31.440898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.917 [2024-05-16 09:48:31.440908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.917 [2024-05-16 09:48:31.441156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.917 [2024-05-16 09:48:31.441381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.917 [2024-05-16 09:48:31.441389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.917 [2024-05-16 09:48:31.441397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.917 [2024-05-16 09:48:31.444945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:37.917 [2024-05-16 09:48:31.453941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.917 [2024-05-16 09:48:31.454514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.917 [2024-05-16 09:48:31.454818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.917 [2024-05-16 09:48:31.454829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.917 [2024-05-16 09:48:31.454837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.917 [2024-05-16 09:48:31.455063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.917 [2024-05-16 09:48:31.455283] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.917 [2024-05-16 09:48:31.455293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.917 [2024-05-16 09:48:31.455300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.917 [2024-05-16 09:48:31.458843] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.917 [2024-05-16 09:48:31.467822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.917 [2024-05-16 09:48:31.468367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.917 [2024-05-16 09:48:31.468705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.917 [2024-05-16 09:48:31.468715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:37.917 [2024-05-16 09:48:31.468723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:37.917 [2024-05-16 09:48:31.468942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:37.917 [2024-05-16 09:48:31.469167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.917 [2024-05-16 09:48:31.469176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.917 [2024-05-16 09:48:31.469188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.917 [2024-05-16 09:48:31.472731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.180 [2024-05-16 09:48:31.481722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.180 [2024-05-16 09:48:31.482274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.482602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.482613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.180 [2024-05-16 09:48:31.482621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.180 [2024-05-16 09:48:31.482840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.180 [2024-05-16 09:48:31.483064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.180 [2024-05-16 09:48:31.483074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.180 [2024-05-16 09:48:31.483081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.180 [2024-05-16 09:48:31.486625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.180 [2024-05-16 09:48:31.495607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.180 [2024-05-16 09:48:31.496159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.496494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.496505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.180 [2024-05-16 09:48:31.496512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.180 [2024-05-16 09:48:31.496732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.180 [2024-05-16 09:48:31.496951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.180 [2024-05-16 09:48:31.496959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.180 [2024-05-16 09:48:31.496966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.180 [2024-05-16 09:48:31.500508] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.180 [2024-05-16 09:48:31.509487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.180 [2024-05-16 09:48:31.510025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.510397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.510411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.180 [2024-05-16 09:48:31.510420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.180 [2024-05-16 09:48:31.510659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.180 [2024-05-16 09:48:31.510881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.180 [2024-05-16 09:48:31.510890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.180 [2024-05-16 09:48:31.510898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.180 [2024-05-16 09:48:31.514456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.180 [2024-05-16 09:48:31.523443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.180 [2024-05-16 09:48:31.524070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.524438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.524452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.180 [2024-05-16 09:48:31.524461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.180 [2024-05-16 09:48:31.524700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.180 [2024-05-16 09:48:31.524923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.180 [2024-05-16 09:48:31.524932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.180 [2024-05-16 09:48:31.524941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.180 [2024-05-16 09:48:31.528501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.180 [2024-05-16 09:48:31.537303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.180 [2024-05-16 09:48:31.537720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.538066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.180 [2024-05-16 09:48:31.538077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.180 [2024-05-16 09:48:31.538085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.538307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.538527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.538536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.538544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.542092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.181 [2024-05-16 09:48:31.551284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.551793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.552101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.552112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.552120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.552339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.552558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.552567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.552575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.556117] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.181 [2024-05-16 09:48:31.565104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.565762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.566131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.566145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.566155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.566393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.566616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.566625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.566633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.570185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.181 [2024-05-16 09:48:31.578963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.579528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.579855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.579866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.579873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.580099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.580319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.580328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.580336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.583885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.181 [2024-05-16 09:48:31.592871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.593402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.593738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.593749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.593756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.593975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.594200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.594210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.594218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.597758] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.181 [2024-05-16 09:48:31.606741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.607374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.607718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.607733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.607742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.607981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.608214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.608224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.608232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.611778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.181 [2024-05-16 09:48:31.620589] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.621174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.621541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.621555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.621565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.621803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.622026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.622036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.622043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.625595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.181 [2024-05-16 09:48:31.634403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.635045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.635399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.635412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.635422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.635661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.635883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.635893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.635900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.639452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.181 [2024-05-16 09:48:31.648325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.649000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.649347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.649366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.649376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.649614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.649837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.649846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.649854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.653405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.181 [2024-05-16 09:48:31.662185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.662789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.663160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.181 [2024-05-16 09:48:31.663176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.181 [2024-05-16 09:48:31.663185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.181 [2024-05-16 09:48:31.663424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.181 [2024-05-16 09:48:31.663647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.181 [2024-05-16 09:48:31.663657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.181 [2024-05-16 09:48:31.663665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.181 [2024-05-16 09:48:31.667219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.181 [2024-05-16 09:48:31.676015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.181 [2024-05-16 09:48:31.676686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.677067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.677082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.182 [2024-05-16 09:48:31.677091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.182 [2024-05-16 09:48:31.677330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.182 [2024-05-16 09:48:31.677553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.182 [2024-05-16 09:48:31.677562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.182 [2024-05-16 09:48:31.677570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.182 [2024-05-16 09:48:31.681120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.182 [2024-05-16 09:48:31.689911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.182 [2024-05-16 09:48:31.690482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.690702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.690713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.182 [2024-05-16 09:48:31.690725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.182 [2024-05-16 09:48:31.690946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.182 [2024-05-16 09:48:31.691173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.182 [2024-05-16 09:48:31.691182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.182 [2024-05-16 09:48:31.691189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.182 [2024-05-16 09:48:31.694732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.182 [2024-05-16 09:48:31.703715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.182 [2024-05-16 09:48:31.704364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.704617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.704630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.182 [2024-05-16 09:48:31.704640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.182 [2024-05-16 09:48:31.704879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.182 [2024-05-16 09:48:31.705111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.182 [2024-05-16 09:48:31.705121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.182 [2024-05-16 09:48:31.705129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.182 [2024-05-16 09:48:31.708678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.182 [2024-05-16 09:48:31.717666] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.182 [2024-05-16 09:48:31.718194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.718523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.718537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.182 [2024-05-16 09:48:31.718546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.182 [2024-05-16 09:48:31.718784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.182 [2024-05-16 09:48:31.719007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.182 [2024-05-16 09:48:31.719016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.182 [2024-05-16 09:48:31.719024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.182 [2024-05-16 09:48:31.722575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.182 [2024-05-16 09:48:31.731574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.182 [2024-05-16 09:48:31.732173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.732493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.182 [2024-05-16 09:48:31.732507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.182 [2024-05-16 09:48:31.732517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.182 [2024-05-16 09:48:31.732759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.182 [2024-05-16 09:48:31.732982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.182 [2024-05-16 09:48:31.732992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.182 [2024-05-16 09:48:31.733000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.182 [2024-05-16 09:48:31.736557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.444 [2024-05-16 09:48:31.745548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.746158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.746496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.746510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.746519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.746758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.746981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.444 [2024-05-16 09:48:31.746991] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.444 [2024-05-16 09:48:31.746999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.444 [2024-05-16 09:48:31.750553] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.444 [2024-05-16 09:48:31.759343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.760004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.760341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.760356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.760365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.760604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.760827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.444 [2024-05-16 09:48:31.760836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.444 [2024-05-16 09:48:31.760844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.444 [2024-05-16 09:48:31.764393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.444 [2024-05-16 09:48:31.773177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.773815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.774124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.774139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.774149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.774388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.774617] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.444 [2024-05-16 09:48:31.774626] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.444 [2024-05-16 09:48:31.774633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.444 [2024-05-16 09:48:31.778183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.444 [2024-05-16 09:48:31.786973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.787597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.787974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.787989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.787998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.788245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.788470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.444 [2024-05-16 09:48:31.788479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.444 [2024-05-16 09:48:31.788487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.444 [2024-05-16 09:48:31.792033] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.444 [2024-05-16 09:48:31.800848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.801491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.801858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.801872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.801882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.802128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.802352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.444 [2024-05-16 09:48:31.802362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.444 [2024-05-16 09:48:31.802369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.444 [2024-05-16 09:48:31.805916] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.444 [2024-05-16 09:48:31.814699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.815379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.815749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.815762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.815772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.816011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.816240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.444 [2024-05-16 09:48:31.816254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.444 [2024-05-16 09:48:31.816262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.444 [2024-05-16 09:48:31.819812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.444 [2024-05-16 09:48:31.828624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.444 [2024-05-16 09:48:31.829176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.829518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.444 [2024-05-16 09:48:31.829532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.444 [2024-05-16 09:48:31.829541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.444 [2024-05-16 09:48:31.829780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.444 [2024-05-16 09:48:31.830003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.830012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.830020] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.833596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.445 [2024-05-16 09:48:31.842595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.843178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.843546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.843560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.843569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.843808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.844031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.844040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.844047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.847606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.445 [2024-05-16 09:48:31.856386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.856998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.857340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.857355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.857365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.857603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.857827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.857837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.857848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.861400] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.445 [2024-05-16 09:48:31.870181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.870719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.871064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.871079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.871088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.871327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.871550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.871559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.871567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.875115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.445 [2024-05-16 09:48:31.884117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.884761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.885134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.885150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.885159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.885398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.885621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.885631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.885639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.889186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.445 [2024-05-16 09:48:31.897965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.898625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.898949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.898962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.898972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.899219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.899444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.899453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.899460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.903012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.445 [2024-05-16 09:48:31.911791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.912416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.912785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.912799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.912808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.913047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.913281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.913291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.913298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.916845] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.445 [2024-05-16 09:48:31.925625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.926349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.926718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.926732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.926741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.926980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.927212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.927222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.927229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.930776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.445 [2024-05-16 09:48:31.939574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.940159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.940533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.940547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.940556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.940795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.941018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.941027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.941035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.944592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.445 [2024-05-16 09:48:31.953379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.954038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.954386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.954400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.445 [2024-05-16 09:48:31.954409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.445 [2024-05-16 09:48:31.954648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.445 [2024-05-16 09:48:31.954870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.445 [2024-05-16 09:48:31.954881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.445 [2024-05-16 09:48:31.954888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.445 [2024-05-16 09:48:31.958439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.445 [2024-05-16 09:48:31.967222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.445 [2024-05-16 09:48:31.967796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.445 [2024-05-16 09:48:31.968158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.446 [2024-05-16 09:48:31.968173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.446 [2024-05-16 09:48:31.968183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.446 [2024-05-16 09:48:31.968421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.446 [2024-05-16 09:48:31.968645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.446 [2024-05-16 09:48:31.968654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.446 [2024-05-16 09:48:31.968662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.446 [2024-05-16 09:48:31.972213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.446 [2024-05-16 09:48:31.981207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.446 [2024-05-16 09:48:31.981768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.446 [2024-05-16 09:48:31.982113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.446 [2024-05-16 09:48:31.982125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.446 [2024-05-16 09:48:31.982133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.446 [2024-05-16 09:48:31.982359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.446 [2024-05-16 09:48:31.982581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.446 [2024-05-16 09:48:31.982589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.446 [2024-05-16 09:48:31.982596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.446 [2024-05-16 09:48:31.986141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.446 [2024-05-16 09:48:31.995130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.446 [2024-05-16 09:48:31.995738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.446 [2024-05-16 09:48:31.996071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.446 [2024-05-16 09:48:31.996086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.446 [2024-05-16 09:48:31.996097] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.446 [2024-05-16 09:48:31.996335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.446 [2024-05-16 09:48:31.996558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.446 [2024-05-16 09:48:31.996568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.446 [2024-05-16 09:48:31.996576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.446 [2024-05-16 09:48:32.000129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.708 [2024-05-16 09:48:32.009125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.708 [2024-05-16 09:48:32.009653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.009944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.009956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.708 [2024-05-16 09:48:32.009964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.708 [2024-05-16 09:48:32.010188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.708 [2024-05-16 09:48:32.010409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.708 [2024-05-16 09:48:32.010418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.708 [2024-05-16 09:48:32.010425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.708 [2024-05-16 09:48:32.013968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.708 [2024-05-16 09:48:32.022956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.708 [2024-05-16 09:48:32.023613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.023942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.023956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.708 [2024-05-16 09:48:32.023966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.708 [2024-05-16 09:48:32.024211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.708 [2024-05-16 09:48:32.024435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.708 [2024-05-16 09:48:32.024444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.708 [2024-05-16 09:48:32.024452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.708 [2024-05-16 09:48:32.027999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.708 [2024-05-16 09:48:32.036834] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.708 [2024-05-16 09:48:32.037508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.037833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.037851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.708 [2024-05-16 09:48:32.037862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.708 [2024-05-16 09:48:32.038108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.708 [2024-05-16 09:48:32.038332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.708 [2024-05-16 09:48:32.038341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.708 [2024-05-16 09:48:32.038350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.708 [2024-05-16 09:48:32.041897] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.708 [2024-05-16 09:48:32.050684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.708 [2024-05-16 09:48:32.051215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.708 [2024-05-16 09:48:32.051529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.051541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.051548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.051768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.051987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.051997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.052004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.055554] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.709 [2024-05-16 09:48:32.064571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.065158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.065534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.065548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.065558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.065796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.066020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.066029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.066037] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.069594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.709 [2024-05-16 09:48:32.078381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.078943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.079239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.079252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.079264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.079484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.079705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.079713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.079721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.083273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.709 [2024-05-16 09:48:32.092267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.092824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.093156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.093167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.093175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.093394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.093613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.093623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.093630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.097177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.709 [2024-05-16 09:48:32.106169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.106721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.107547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.107570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.107578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.107803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.108025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.108033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.108040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.111593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.709 [2024-05-16 09:48:32.119954] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.120500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.120860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.120872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.120879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.121108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.121328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.121338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.121345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.124885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.709 [2024-05-16 09:48:32.133916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.134602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.134973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.134987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.134997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.135244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.135467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.135476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.135484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.139030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.709 [2024-05-16 09:48:32.147811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.148379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.148685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.148696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.148704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.148923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.149147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.149157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.149164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.152713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.709 [2024-05-16 09:48:32.161698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.162416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.162786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.162800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.162809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.163048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.163280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.163290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.709 [2024-05-16 09:48:32.163298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.709 [2024-05-16 09:48:32.166848] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.709 [2024-05-16 09:48:32.175630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.709 [2024-05-16 09:48:32.176189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.176504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.709 [2024-05-16 09:48:32.176515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.709 [2024-05-16 09:48:32.176522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.709 [2024-05-16 09:48:32.176741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.709 [2024-05-16 09:48:32.176961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.709 [2024-05-16 09:48:32.176970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.176977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.180523] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.710 [2024-05-16 09:48:32.189602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.710 [2024-05-16 09:48:32.190162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.190491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.190501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.710 [2024-05-16 09:48:32.190509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.710 [2024-05-16 09:48:32.190728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.710 [2024-05-16 09:48:32.190947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.710 [2024-05-16 09:48:32.190956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.190963] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.194510] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.710 [2024-05-16 09:48:32.203497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.710 [2024-05-16 09:48:32.204184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.204570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.204585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.710 [2024-05-16 09:48:32.204594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.710 [2024-05-16 09:48:32.204833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.710 [2024-05-16 09:48:32.205065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.710 [2024-05-16 09:48:32.205079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.205086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.208636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.710 [2024-05-16 09:48:32.217421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.710 [2024-05-16 09:48:32.218101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.218474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.218488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.710 [2024-05-16 09:48:32.218498] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.710 [2024-05-16 09:48:32.218737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.710 [2024-05-16 09:48:32.218960] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.710 [2024-05-16 09:48:32.218969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.218977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.222533] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.710 [2024-05-16 09:48:32.231327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.710 [2024-05-16 09:48:32.231985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.232322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.232337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.710 [2024-05-16 09:48:32.232347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.710 [2024-05-16 09:48:32.232592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.710 [2024-05-16 09:48:32.232816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.710 [2024-05-16 09:48:32.232827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.232835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.236387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.710 [2024-05-16 09:48:32.245200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.710 [2024-05-16 09:48:32.245652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.245989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.246001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.710 [2024-05-16 09:48:32.246009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.710 [2024-05-16 09:48:32.246261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.710 [2024-05-16 09:48:32.246482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.710 [2024-05-16 09:48:32.246491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.246502] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.250046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.710 [2024-05-16 09:48:32.259040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.710 [2024-05-16 09:48:32.259699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.260081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.710 [2024-05-16 09:48:32.260096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.710 [2024-05-16 09:48:32.260105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.710 [2024-05-16 09:48:32.260344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.710 [2024-05-16 09:48:32.260567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.710 [2024-05-16 09:48:32.260576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.710 [2024-05-16 09:48:32.260584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.710 [2024-05-16 09:48:32.264138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.972 [2024-05-16 09:48:32.272924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.273485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.273843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.273854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.273862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.274088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.274310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.274319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.274327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.277869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.972 [2024-05-16 09:48:32.286866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.287591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.287981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.287995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.288005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.288251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.288475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.288484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.288492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.292040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.972 [2024-05-16 09:48:32.300823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.301484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.301829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.301843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.301852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.302097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.302321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.302330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.302338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.305888] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.972 [2024-05-16 09:48:32.314667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.315308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.315667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.315681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.315690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.315929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.316159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.316169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.316177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.319722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.972 [2024-05-16 09:48:32.328508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.328902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.329279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.329317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.329328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.329566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.329790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.329799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.329807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.333385] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.972 [2024-05-16 09:48:32.342390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.342933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.343239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.343251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.343259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.343480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.343699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.343709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.343716] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.347259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.972 [2024-05-16 09:48:32.356248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.356936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.357318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.357334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.357344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.357583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.972 [2024-05-16 09:48:32.357806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.972 [2024-05-16 09:48:32.357816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.972 [2024-05-16 09:48:32.357824] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.972 [2024-05-16 09:48:32.361376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.972 [2024-05-16 09:48:32.370162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.972 [2024-05-16 09:48:32.370682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.371015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.972 [2024-05-16 09:48:32.371026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.972 [2024-05-16 09:48:32.371034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.972 [2024-05-16 09:48:32.371259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.371480] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.371489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.371496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.375037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.973 [2024-05-16 09:48:32.384029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.384680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.385065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.385079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.385089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.385328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.385551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.385560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.385568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.389121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.973 [2024-05-16 09:48:32.397895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.398435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.398756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.398767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.398775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.398994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.399218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.399229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.399236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.402779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.973 [2024-05-16 09:48:32.411771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.412437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.412786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.412800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.412809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.413048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.413280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.413289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.413297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.416845] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.973 [2024-05-16 09:48:32.425626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.426056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.426380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.426395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.426403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.426623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.426843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.426852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.426859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.430407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.973 [2024-05-16 09:48:32.439416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.439830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.440340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.440377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.440388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.440627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.440849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.440857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.440865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.444422] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.973 [2024-05-16 09:48:32.453431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.454013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.454236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.454248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.454256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.454476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.454694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.454702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.454709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.458259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.973 [2024-05-16 09:48:32.467252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.467801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.468094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.468106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.468118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.468338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.468556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.468564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.468571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.472116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.973 [2024-05-16 09:48:32.481103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.481644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.481936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.481946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.481953] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.482177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.482397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.482404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.482412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.485960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.973 [2024-05-16 09:48:32.494950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.495473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.495758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.495768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.973 [2024-05-16 09:48:32.495775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.973 [2024-05-16 09:48:32.495994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.973 [2024-05-16 09:48:32.496218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.973 [2024-05-16 09:48:32.496226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.973 [2024-05-16 09:48:32.496234] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.973 [2024-05-16 09:48:32.499776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.973 [2024-05-16 09:48:32.508767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.973 [2024-05-16 09:48:32.509393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.973 [2024-05-16 09:48:32.509766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.974 [2024-05-16 09:48:32.509780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.974 [2024-05-16 09:48:32.509789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.974 [2024-05-16 09:48:32.510032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.974 [2024-05-16 09:48:32.510260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.974 [2024-05-16 09:48:32.510269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.974 [2024-05-16 09:48:32.510277] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.974 [2024-05-16 09:48:32.513827] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.974 [2024-05-16 09:48:32.522612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.974 [2024-05-16 09:48:32.523161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.974 [2024-05-16 09:48:32.523537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.974 [2024-05-16 09:48:32.523550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:38.974 [2024-05-16 09:48:32.523559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:38.974 [2024-05-16 09:48:32.523797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:38.974 [2024-05-16 09:48:32.524018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.974 [2024-05-16 09:48:32.524026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.974 [2024-05-16 09:48:32.524034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.974 [2024-05-16 09:48:32.527586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.236 [2024-05-16 09:48:32.536595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.236 [2024-05-16 09:48:32.537142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.537509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.537522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.236 [2024-05-16 09:48:32.537531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.236 [2024-05-16 09:48:32.537769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.236 [2024-05-16 09:48:32.537992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.236 [2024-05-16 09:48:32.538000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.236 [2024-05-16 09:48:32.538008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.236 [2024-05-16 09:48:32.541557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.236 [2024-05-16 09:48:32.550552] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.236 [2024-05-16 09:48:32.551578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.551878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.551890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.236 [2024-05-16 09:48:32.551898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.236 [2024-05-16 09:48:32.552132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.236 [2024-05-16 09:48:32.552357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.236 [2024-05-16 09:48:32.552366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.236 [2024-05-16 09:48:32.552373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.236 [2024-05-16 09:48:32.555921] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.236 [2024-05-16 09:48:32.564515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.236 [2024-05-16 09:48:32.565090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.565393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.565403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.236 [2024-05-16 09:48:32.565410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.236 [2024-05-16 09:48:32.565630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.236 [2024-05-16 09:48:32.565848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.236 [2024-05-16 09:48:32.565856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.236 [2024-05-16 09:48:32.565863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.236 [2024-05-16 09:48:32.569410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.236 [2024-05-16 09:48:32.578398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.236 [2024-05-16 09:48:32.578926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.579219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.579231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.236 [2024-05-16 09:48:32.579239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.236 [2024-05-16 09:48:32.579459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.236 [2024-05-16 09:48:32.579677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.236 [2024-05-16 09:48:32.579685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.236 [2024-05-16 09:48:32.579691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.236 [2024-05-16 09:48:32.583243] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.236 [2024-05-16 09:48:32.592237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.236 [2024-05-16 09:48:32.592890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.593222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.236 [2024-05-16 09:48:32.593237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.236 [2024-05-16 09:48:32.593246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.236 [2024-05-16 09:48:32.593485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.593707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.593720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.593728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.597284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.237 [2024-05-16 09:48:32.606066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.606605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.606918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.606928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.606935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.607161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.607380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.607388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.607395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.610937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.237 [2024-05-16 09:48:32.619929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.620567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.620887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.620899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.620908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.621153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.621376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.621385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.621392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.624942] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.237 [2024-05-16 09:48:32.633748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.634394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.634715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.634728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.634738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.634976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.635204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.635213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.635225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.638776] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.237 [2024-05-16 09:48:32.647561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.648259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.648580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.648593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.648602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.648840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.649070] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.649079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.649086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.652634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.237 [2024-05-16 09:48:32.661449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.662074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.662407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.662420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.662429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.662667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.662889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.662897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.662905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.666455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.237 [2024-05-16 09:48:32.675444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.676013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.676334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.676344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.676352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.676571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.676789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.676797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.676804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.680352] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.237 [2024-05-16 09:48:32.689441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.690043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.690333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.690343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.690351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.690571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.690790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.690798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.690805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.694371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.237 [2024-05-16 09:48:32.703441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.704067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.704413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.704426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.704435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.704674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.704896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.704904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.704911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.708463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.237 [2024-05-16 09:48:32.717241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.717856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.718179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.718194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.237 [2024-05-16 09:48:32.718203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.237 [2024-05-16 09:48:32.718442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.237 [2024-05-16 09:48:32.718664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.237 [2024-05-16 09:48:32.718672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.237 [2024-05-16 09:48:32.718680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.237 [2024-05-16 09:48:32.722233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.237 [2024-05-16 09:48:32.731245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.237 [2024-05-16 09:48:32.731900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.237 [2024-05-16 09:48:32.732226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.732241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.238 [2024-05-16 09:48:32.732251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.238 [2024-05-16 09:48:32.732489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.238 [2024-05-16 09:48:32.732711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.238 [2024-05-16 09:48:32.732719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.238 [2024-05-16 09:48:32.732726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.238 [2024-05-16 09:48:32.736282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.238 [2024-05-16 09:48:32.745062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.238 [2024-05-16 09:48:32.745710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.745923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.745936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.238 [2024-05-16 09:48:32.745945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.238 [2024-05-16 09:48:32.746192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.238 [2024-05-16 09:48:32.746417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.238 [2024-05-16 09:48:32.746425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.238 [2024-05-16 09:48:32.746432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.238 [2024-05-16 09:48:32.749978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.238 [2024-05-16 09:48:32.758962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.238 [2024-05-16 09:48:32.759582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.759905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.759918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.238 [2024-05-16 09:48:32.759927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.238 [2024-05-16 09:48:32.760174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.238 [2024-05-16 09:48:32.760397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.238 [2024-05-16 09:48:32.760406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.238 [2024-05-16 09:48:32.760413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.238 [2024-05-16 09:48:32.763957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.238 [2024-05-16 09:48:32.772937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.238 [2024-05-16 09:48:32.773607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.773924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.773937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.238 [2024-05-16 09:48:32.773946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.238 [2024-05-16 09:48:32.774192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.238 [2024-05-16 09:48:32.774415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.238 [2024-05-16 09:48:32.774424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.238 [2024-05-16 09:48:32.774431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.238 [2024-05-16 09:48:32.777976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.238 [2024-05-16 09:48:32.786758] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.238 [2024-05-16 09:48:32.787428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.787825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.238 [2024-05-16 09:48:32.787838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.238 [2024-05-16 09:48:32.787847] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.238 [2024-05-16 09:48:32.788094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.238 [2024-05-16 09:48:32.788317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.238 [2024-05-16 09:48:32.788325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.238 [2024-05-16 09:48:32.788332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.238 [2024-05-16 09:48:32.791880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.500 [2024-05-16 09:48:32.800674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.801357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.801678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.801691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.801701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.801939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.802170] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.802180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.802188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.805735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.500 [2024-05-16 09:48:32.814507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.815161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.815497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.815514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.815524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.815762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.815984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.815992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.815999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.819555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.500 [2024-05-16 09:48:32.828332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.828988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.829307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.829321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.829330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.829568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.829790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.829799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.829806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.833372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.500 [2024-05-16 09:48:32.842157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.842830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.843155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.843169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.843179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.843417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.843639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.843647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.843654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.847203] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.500 [2024-05-16 09:48:32.855972] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.856629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.856946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.856959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.856972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.857220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.857443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.857451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.857458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.861008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.500 [2024-05-16 09:48:32.869836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.870461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.870787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.870800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.870809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.871048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.871277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.871287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.871294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.874842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.500 [2024-05-16 09:48:32.883643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.884329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.884567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.884580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.884589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.884828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.885050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.885065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.885073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.888622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.500 [2024-05-16 09:48:32.897609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.898278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.898597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.898610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.898619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.898862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.899090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.899100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.899107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.902656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.500 [2024-05-16 09:48:32.911437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.912096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.912488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.912501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.500 [2024-05-16 09:48:32.912510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.500 [2024-05-16 09:48:32.912748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.500 [2024-05-16 09:48:32.912970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.500 [2024-05-16 09:48:32.912978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.500 [2024-05-16 09:48:32.912985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.500 [2024-05-16 09:48:32.916539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.500 [2024-05-16 09:48:32.925322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.500 [2024-05-16 09:48:32.926006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.926347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.500 [2024-05-16 09:48:32.926361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:32.926370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:32.926609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:32.926830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:32.926839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:32.926846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:32.930399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.501 [2024-05-16 09:48:32.939203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:32.939734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.940029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.940039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:32.940046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:32.940273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:32.940496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:32.940504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:32.940511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:32.944058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.501 [2024-05-16 09:48:32.953049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:32.953691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.954011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.954024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:32.954033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:32.954280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:32.954502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:32.954511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:32.954518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:32.958069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.501 [2024-05-16 09:48:32.966857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:32.967376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.967686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.967695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:32.967703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:32.967922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:32.968147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:32.968156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:32.968163] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:32.971705] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.501 [2024-05-16 09:48:32.980685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:32.981239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.981463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.981473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:32.981480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:32.981699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:32.981917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:32.981929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:32.981936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:32.985489] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.501 [2024-05-16 09:48:32.994469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:32.995124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.995516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:32.995528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:32.995538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:32.995777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:32.995998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:32.996006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:32.996013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:32.999568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.501 [2024-05-16 09:48:33.008352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:33.008898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:33.009237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:33.009252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:33.009261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:33.009499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:33.009721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:33.009729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:33.009737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:33.013286] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.501 [2024-05-16 09:48:33.022272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:33.022928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:33.023159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:33.023173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:33.023182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:33.023421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:33.023643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:33.023651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:33.023662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:33.027214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.501 [2024-05-16 09:48:33.036220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.501 [2024-05-16 09:48:33.036826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:33.037148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.501 [2024-05-16 09:48:33.037162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.501 [2024-05-16 09:48:33.037171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.501 [2024-05-16 09:48:33.037410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.501 [2024-05-16 09:48:33.037632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.501 [2024-05-16 09:48:33.037640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.501 [2024-05-16 09:48:33.037647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.501 [2024-05-16 09:48:33.041197] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.501 [2024-05-16 09:48:33.050183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.502 [2024-05-16 09:48:33.050838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.502 [2024-05-16 09:48:33.051141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.502 [2024-05-16 09:48:33.051155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.502 [2024-05-16 09:48:33.051165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.502 [2024-05-16 09:48:33.051404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.502 [2024-05-16 09:48:33.051626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.502 [2024-05-16 09:48:33.051634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.502 [2024-05-16 09:48:33.051641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.502 [2024-05-16 09:48:33.055193] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.765 [2024-05-16 09:48:33.063980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.765 [2024-05-16 09:48:33.064539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.064860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.064869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.765 [2024-05-16 09:48:33.064877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.765 [2024-05-16 09:48:33.065104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.765 [2024-05-16 09:48:33.065324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.765 [2024-05-16 09:48:33.065331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.765 [2024-05-16 09:48:33.065338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.765 [2024-05-16 09:48:33.068885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.765 [2024-05-16 09:48:33.077901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.765 [2024-05-16 09:48:33.078569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.078890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.078902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.765 [2024-05-16 09:48:33.078912] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.765 [2024-05-16 09:48:33.079160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.765 [2024-05-16 09:48:33.079383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.765 [2024-05-16 09:48:33.079391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.765 [2024-05-16 09:48:33.079399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.765 [2024-05-16 09:48:33.082943] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.765 [2024-05-16 09:48:33.091730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.765 [2024-05-16 09:48:33.092389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.092709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.092721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.765 [2024-05-16 09:48:33.092731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.765 [2024-05-16 09:48:33.092969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.765 [2024-05-16 09:48:33.093199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.765 [2024-05-16 09:48:33.093213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.765 [2024-05-16 09:48:33.093221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.765 [2024-05-16 09:48:33.096768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.765 [2024-05-16 09:48:33.105544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.765 [2024-05-16 09:48:33.106097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.106490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.765 [2024-05-16 09:48:33.106503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.765 [2024-05-16 09:48:33.106513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.765 [2024-05-16 09:48:33.106751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.765 [2024-05-16 09:48:33.106973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.765 [2024-05-16 09:48:33.106981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.765 [2024-05-16 09:48:33.106988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.765 [2024-05-16 09:48:33.110543] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.765 [2024-05-16 09:48:33.119533] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.765 [2024-05-16 09:48:33.120098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.120419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.120429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.120437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.120661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.766 [2024-05-16 09:48:33.120880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.766 [2024-05-16 09:48:33.120887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.766 [2024-05-16 09:48:33.120894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.766 [2024-05-16 09:48:33.124447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.766 [2024-05-16 09:48:33.133436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.766 [2024-05-16 09:48:33.133958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.134267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.134278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.134286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.134505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.766 [2024-05-16 09:48:33.134724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.766 [2024-05-16 09:48:33.134732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.766 [2024-05-16 09:48:33.134739] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.766 [2024-05-16 09:48:33.138280] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.766 [2024-05-16 09:48:33.147293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.766 [2024-05-16 09:48:33.147820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.148131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.148142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.148150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.148370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.766 [2024-05-16 09:48:33.148588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.766 [2024-05-16 09:48:33.148596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.766 [2024-05-16 09:48:33.148602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.766 [2024-05-16 09:48:33.152150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.766 [2024-05-16 09:48:33.161136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.766 [2024-05-16 09:48:33.161707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.161999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.162009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.162016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.162241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.766 [2024-05-16 09:48:33.162459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.766 [2024-05-16 09:48:33.162467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.766 [2024-05-16 09:48:33.162473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.766 [2024-05-16 09:48:33.166007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.766 [2024-05-16 09:48:33.174981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.766 [2024-05-16 09:48:33.175541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.175840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.175849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.175856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.176080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.766 [2024-05-16 09:48:33.176299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.766 [2024-05-16 09:48:33.176307] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.766 [2024-05-16 09:48:33.176313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.766 [2024-05-16 09:48:33.179854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.766 [2024-05-16 09:48:33.188838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.766 [2024-05-16 09:48:33.189456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.189780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.189793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.189802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.190040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.766 [2024-05-16 09:48:33.190272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.766 [2024-05-16 09:48:33.190282] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.766 [2024-05-16 09:48:33.190289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.766 [2024-05-16 09:48:33.193839] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.766 [2024-05-16 09:48:33.202822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.766 [2024-05-16 09:48:33.203430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.203661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.766 [2024-05-16 09:48:33.203677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.766 [2024-05-16 09:48:33.203687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.766 [2024-05-16 09:48:33.203925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.204156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.204165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.204173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.207719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.767 [2024-05-16 09:48:33.216712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.767 [2024-05-16 09:48:33.217371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.217683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.217695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.767 [2024-05-16 09:48:33.217704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.767 [2024-05-16 09:48:33.217943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.218175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.218189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.218196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.221741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.767 [2024-05-16 09:48:33.230514] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.767 [2024-05-16 09:48:33.231148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.231462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.231474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.767 [2024-05-16 09:48:33.231483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.767 [2024-05-16 09:48:33.231721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.231943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.231951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.231959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.235526] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.767 [2024-05-16 09:48:33.244309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.767 [2024-05-16 09:48:33.244874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.245197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.245208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.767 [2024-05-16 09:48:33.245220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.767 [2024-05-16 09:48:33.245439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.245658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.245665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.245672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.249215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.767 [2024-05-16 09:48:33.258200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.767 [2024-05-16 09:48:33.258851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.259194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.259208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.767 [2024-05-16 09:48:33.259217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.767 [2024-05-16 09:48:33.259455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.259677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.259684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.259692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.263240] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.767 [2024-05-16 09:48:33.272013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.767 [2024-05-16 09:48:33.272575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.272884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.272893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.767 [2024-05-16 09:48:33.272900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.767 [2024-05-16 09:48:33.273125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.273344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.273352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.273358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.276899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.767 [2024-05-16 09:48:33.285912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.767 [2024-05-16 09:48:33.286577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.286936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.767 [2024-05-16 09:48:33.286948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.767 [2024-05-16 09:48:33.286957] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.767 [2024-05-16 09:48:33.287208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.767 [2024-05-16 09:48:33.287431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.767 [2024-05-16 09:48:33.287439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.767 [2024-05-16 09:48:33.287446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.767 [2024-05-16 09:48:33.290991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:39.767 [2024-05-16 09:48:33.299769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.768 [2024-05-16 09:48:33.300385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.768 [2024-05-16 09:48:33.300637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.768 [2024-05-16 09:48:33.300649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.768 [2024-05-16 09:48:33.300658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.768 [2024-05-16 09:48:33.300897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.768 [2024-05-16 09:48:33.301128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.768 [2024-05-16 09:48:33.301137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.768 [2024-05-16 09:48:33.301145] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.768 [2024-05-16 09:48:33.304690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:39.768 [2024-05-16 09:48:33.313674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:39.768 [2024-05-16 09:48:33.314289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.768 [2024-05-16 09:48:33.314637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:39.768 [2024-05-16 09:48:33.314650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:39.768 [2024-05-16 09:48:33.314659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:39.768 [2024-05-16 09:48:33.314897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:39.768 [2024-05-16 09:48:33.315126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:39.768 [2024-05-16 09:48:33.315135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:39.768 [2024-05-16 09:48:33.315142] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:39.768 [2024-05-16 09:48:33.318691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.031 [2024-05-16 09:48:33.327477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.031 [2024-05-16 09:48:33.328001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.328182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.328195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.031 [2024-05-16 09:48:33.328203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.031 [2024-05-16 09:48:33.328423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.031 [2024-05-16 09:48:33.328646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.031 [2024-05-16 09:48:33.328654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.031 [2024-05-16 09:48:33.328661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.031 [2024-05-16 09:48:33.332223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.031 [2024-05-16 09:48:33.341432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.031 [2024-05-16 09:48:33.342086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.342432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.342444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.031 [2024-05-16 09:48:33.342453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.031 [2024-05-16 09:48:33.342692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.031 [2024-05-16 09:48:33.342914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.031 [2024-05-16 09:48:33.342922] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.031 [2024-05-16 09:48:33.342929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.031 [2024-05-16 09:48:33.346483] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.031 [2024-05-16 09:48:33.355259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.031 [2024-05-16 09:48:33.355916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.356272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.356286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.031 [2024-05-16 09:48:33.356295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.031 [2024-05-16 09:48:33.356534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.031 [2024-05-16 09:48:33.356756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.031 [2024-05-16 09:48:33.356763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.031 [2024-05-16 09:48:33.356771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.031 [2024-05-16 09:48:33.360322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.031 [2024-05-16 09:48:33.369094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.031 [2024-05-16 09:48:33.369748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.369945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.369959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.031 [2024-05-16 09:48:33.369969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.031 [2024-05-16 09:48:33.370215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.031 [2024-05-16 09:48:33.370440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.031 [2024-05-16 09:48:33.370451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.031 [2024-05-16 09:48:33.370459] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.031 [2024-05-16 09:48:33.374007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.031 [2024-05-16 09:48:33.382995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.031 [2024-05-16 09:48:33.383526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.383909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.383922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.031 [2024-05-16 09:48:33.383931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.031 [2024-05-16 09:48:33.384183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.031 [2024-05-16 09:48:33.384408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.031 [2024-05-16 09:48:33.384416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.031 [2024-05-16 09:48:33.384423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.031 [2024-05-16 09:48:33.387969] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.031 [2024-05-16 09:48:33.396955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.031 [2024-05-16 09:48:33.397642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.031 [2024-05-16 09:48:33.397961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.397974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.397983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.398230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.398453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.398462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.398469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.402016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.032 [2024-05-16 09:48:33.410790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.411431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.411754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.411766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.411775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.412014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.412245] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.412255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.412266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.415812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.032 [2024-05-16 09:48:33.424605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.425226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.425547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.425560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.425569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.425807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.426029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.426038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.426045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.429601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.032 [2024-05-16 09:48:33.438600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.439226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.439544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.439557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.439566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.439804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.440026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.440035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.440042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.443597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.032 [2024-05-16 09:48:33.452581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.453014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.453347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.453362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.453370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.453591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.453811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.453819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.453826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.457432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.032 [2024-05-16 09:48:33.466422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.466873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.467259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.467270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.467277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.467497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.467715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.467723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.467730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.471271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.032 [2024-05-16 09:48:33.480247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.480811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.480982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.480992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.032 [2024-05-16 09:48:33.480999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.032 [2024-05-16 09:48:33.481223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.032 [2024-05-16 09:48:33.481442] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.032 [2024-05-16 09:48:33.481450] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.032 [2024-05-16 09:48:33.481457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.032 [2024-05-16 09:48:33.485001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.032 [2024-05-16 09:48:33.494222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.032 [2024-05-16 09:48:33.494837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.495155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.032 [2024-05-16 09:48:33.495169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.495178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.495417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.495639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.495647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.495654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.499204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.033 [2024-05-16 09:48:33.508201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.033 [2024-05-16 09:48:33.508819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.509142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.509155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.509165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.509403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.509625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.509634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.509641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.513190] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.033 [2024-05-16 09:48:33.522176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.033 [2024-05-16 09:48:33.522850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.523178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.523192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.523201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.523440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.523661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.523669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.523677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.527226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.033 [2024-05-16 09:48:33.536019] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.033 [2024-05-16 09:48:33.536638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.536957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.536970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.536980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.537227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.537450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.537458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.537465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.541010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.033 [2024-05-16 09:48:33.549989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.033 [2024-05-16 09:48:33.550542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.550883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.550895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.550905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.551151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.551374] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.551382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.551390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.554934] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.033 [2024-05-16 09:48:33.563917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.033 [2024-05-16 09:48:33.564458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.564760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.564770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.564777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.564997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.565222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.565230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.565237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.568780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.033 [2024-05-16 09:48:33.577781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.033 [2024-05-16 09:48:33.578405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.578733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.033 [2024-05-16 09:48:33.578747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.033 [2024-05-16 09:48:33.578757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.033 [2024-05-16 09:48:33.578995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.033 [2024-05-16 09:48:33.579224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.033 [2024-05-16 09:48:33.579233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.033 [2024-05-16 09:48:33.579240] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.033 [2024-05-16 09:48:33.582794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 530856 Killed "${NVMF_APP[@]}" "$@" 00:35:40.033 09:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:40.033 09:48:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.297 [2024-05-16 09:48:33.591592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.592162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.592501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.592510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.592518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.592737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.592956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.592964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.592971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 [2024-05-16 09:48:33.596522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=532543 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 532543 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 532543 ']' 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:40.297 09:48:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.297 [2024-05-16 09:48:33.605525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.606201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.606535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.606549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.606558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.606796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.607019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.607027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.607034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 [2024-05-16 09:48:33.610587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
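The xtrace above shows tgt_init relaunching nvmf_tgt (nvmfpid=532543) inside the cvl_0_0_ns_spdk namespace with -e 0xFFFF -m 0xE, then blocking in waitforlisten until the RPC socket answers. A minimal sketch of that restart-and-wait pattern, assuming the stock scripts/rpc.py client and its rpc_get_methods call; this is illustrative, not the actual autotest_common.sh implementation:

    # start the target in the test namespace, as in the trace line above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app responds (waitforlisten does roughly this)
    for _ in $(seq 1 100); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done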
00:35:40.297 [2024-05-16 09:48:33.619368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.620030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.620391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.620405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.620414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.620652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.620875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.620884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.620892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 [2024-05-16 09:48:33.624446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.297 [2024-05-16 09:48:33.633239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.633861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.634197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.634213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.634222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.634466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.634694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.634704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.634712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 [2024-05-16 09:48:33.638269] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.297 [2024-05-16 09:48:33.645984] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:35:40.297 [2024-05-16 09:48:33.646027] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.297 [2024-05-16 09:48:33.647050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.647713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.648039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.648060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.648070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.648309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.648532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.648540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.648547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 [2024-05-16 09:48:33.652099] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.297 [2024-05-16 09:48:33.660882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.661452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.661748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.661758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.661765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.661985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.662209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.662217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.662224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 [2024-05-16 09:48:33.665768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.297 [2024-05-16 09:48:33.674751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.297 [2024-05-16 09:48:33.675266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.675602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.297 [2024-05-16 09:48:33.675612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.297 [2024-05-16 09:48:33.675619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.297 [2024-05-16 09:48:33.675839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.297 [2024-05-16 09:48:33.676060] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.297 [2024-05-16 09:48:33.676069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.297 [2024-05-16 09:48:33.676076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.297 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.298 [2024-05-16 09:48:33.679624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.298 [2024-05-16 09:48:33.688622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.689274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.689588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.689601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.689611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.689849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.690077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.690086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.690094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.693681] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.298 [2024-05-16 09:48:33.702483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.703013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.703349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.703360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.703367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.703587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.703806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.703814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.703822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.707372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.298 [2024-05-16 09:48:33.716371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.716930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.717186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.717198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.717205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.717425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.717644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.717652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.717659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.721208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.298 [2024-05-16 09:48:33.727922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.298 [2024-05-16 09:48:33.730207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.730771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.731100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.731111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.731119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.731339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.731558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.731566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.731573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.735144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.298 [2024-05-16 09:48:33.744152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.744728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.745088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.745099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.745106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.745325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.745543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.745551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.745559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.749188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.298 [2024-05-16 09:48:33.757991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.758552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.758867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.758878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.758885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.759111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.759348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.759359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.759366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.762915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.298 [2024-05-16 09:48:33.771926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.772557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.772904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.772914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.772922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.773147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.773366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.773374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.773381] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.776923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.298 [2024-05-16 09:48:33.781576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.298 [2024-05-16 09:48:33.781601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.298 [2024-05-16 09:48:33.781610] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.298 [2024-05-16 09:48:33.781616] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:40.298 [2024-05-16 09:48:33.781621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:40.298 [2024-05-16 09:48:33.781836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.298 [2024-05-16 09:48:33.781957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.298 [2024-05-16 09:48:33.781959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:40.298 [2024-05-16 09:48:33.785930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.786506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.786676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.786685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.786693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.786912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.787138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.298 [2024-05-16 09:48:33.787146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.298 [2024-05-16 09:48:33.787153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.298 [2024-05-16 09:48:33.790699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.298 [2024-05-16 09:48:33.799915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.298 [2024-05-16 09:48:33.800469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.800786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.298 [2024-05-16 09:48:33.800796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.298 [2024-05-16 09:48:33.800803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.298 [2024-05-16 09:48:33.801023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.298 [2024-05-16 09:48:33.801248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.299 [2024-05-16 09:48:33.801256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.299 [2024-05-16 09:48:33.801263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.299 [2024-05-16 09:48:33.804806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
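Per the app_setup_trace notices above, the tracepoint group enabled with -e 0xFFFF can be inspected either live or offline, for example (assuming the spdk_trace tool was built under build/bin in the same workspace):

    # live snapshot of the running app's trace buffer (shm name nvmf, instance id 0)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0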
00:35:40.299 [2024-05-16 09:48:33.813814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.299 [2024-05-16 09:48:33.814475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.299 [2024-05-16 09:48:33.814892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.299 [2024-05-16 09:48:33.814905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.299 [2024-05-16 09:48:33.814915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.299 [2024-05-16 09:48:33.815168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.299 [2024-05-16 09:48:33.815396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.299 [2024-05-16 09:48:33.815405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.299 [2024-05-16 09:48:33.815412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.299 [2024-05-16 09:48:33.818963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.299 [2024-05-16 09:48:33.827754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.299 [2024-05-16 09:48:33.828374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.299 [2024-05-16 09:48:33.828713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.299 [2024-05-16 09:48:33.828726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.299 [2024-05-16 09:48:33.828736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.299 [2024-05-16 09:48:33.828977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.299 [2024-05-16 09:48:33.829205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.299 [2024-05-16 09:48:33.829213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.299 [2024-05-16 09:48:33.829221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.299 [2024-05-16 09:48:33.832784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.299 [2024-05-16 09:48:33.841586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.299 [2024-05-16 09:48:33.842151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.299 [2024-05-16 09:48:33.842547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.299 [2024-05-16 09:48:33.842560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.299 [2024-05-16 09:48:33.842569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.299 [2024-05-16 09:48:33.842811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.299 [2024-05-16 09:48:33.843033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.299 [2024-05-16 09:48:33.843041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.299 [2024-05-16 09:48:33.843049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.299 [2024-05-16 09:48:33.846605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.561 [2024-05-16 09:48:33.855390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.855925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.856242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.856252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.856260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.856480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.856699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.856707] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.561 [2024-05-16 09:48:33.856722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.561 [2024-05-16 09:48:33.860274] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.561 [2024-05-16 09:48:33.869258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.869840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.870038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.870048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.870060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.870279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.870497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.870505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.561 [2024-05-16 09:48:33.870512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.561 [2024-05-16 09:48:33.874055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.561 [2024-05-16 09:48:33.883249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.883832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.884140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.884150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.884158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.884377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.884595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.884603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.561 [2024-05-16 09:48:33.884610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.561 [2024-05-16 09:48:33.888162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.561 [2024-05-16 09:48:33.897152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.897761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.897991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.898003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.898013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.898260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.898483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.898491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.561 [2024-05-16 09:48:33.898503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.561 [2024-05-16 09:48:33.902103] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.561 [2024-05-16 09:48:33.911104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.911772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.912128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.912142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.912152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.912390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.912612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.912620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.561 [2024-05-16 09:48:33.912627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.561 [2024-05-16 09:48:33.916180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.561 [2024-05-16 09:48:33.924969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.925633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.925982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.925995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.926005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.926251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.926475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.926483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.561 [2024-05-16 09:48:33.926490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.561 [2024-05-16 09:48:33.930039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.561 [2024-05-16 09:48:33.938845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.561 [2024-05-16 09:48:33.939421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.939750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.561 [2024-05-16 09:48:33.939772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.561 [2024-05-16 09:48:33.939786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.561 [2024-05-16 09:48:33.940078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.561 [2024-05-16 09:48:33.940301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.561 [2024-05-16 09:48:33.940309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:33.940316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:33.943865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.562 [2024-05-16 09:48:33.952656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:33.953314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.953647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.953661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:33.953671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:33.953910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:33.954138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:33.954148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:33.954155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:33.957703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.562 [2024-05-16 09:48:33.966494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:33.967173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.967373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.967385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:33.967395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:33.967633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:33.967856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:33.967864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:33.967872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:33.971427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.562 [2024-05-16 09:48:33.980424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:33.981071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.981394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.981407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:33.981417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:33.981656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:33.981878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:33.981886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:33.981894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:33.985455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.562 [2024-05-16 09:48:33.994250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:33.994758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.995097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:33.995112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:33.995121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:33.995360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:33.995582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:33.995590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:33.995597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:33.999148] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.562 [2024-05-16 09:48:34.008143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:34.008696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:34.009044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:34.009060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:34.009069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:34.009288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:34.009506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:34.009514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:34.009521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:34.013067] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.562 [2024-05-16 09:48:34.022059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:34.022714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:34.023050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:34.023069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:34.023079] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:34.023317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:34.023540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:34.023548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:34.023556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:34.027108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.562 [2024-05-16 09:48:34.035907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:34.036558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:34.036891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.562 [2024-05-16 09:48:34.036904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.562 [2024-05-16 09:48:34.036913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.562 [2024-05-16 09:48:34.037159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.562 [2024-05-16 09:48:34.037382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.562 [2024-05-16 09:48:34.037390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.562 [2024-05-16 09:48:34.037398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.562 [2024-05-16 09:48:34.040948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.562 [2024-05-16 09:48:34.049734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.562 [2024-05-16 09:48:34.050380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.050713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.050727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.563 [2024-05-16 09:48:34.050736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.563 [2024-05-16 09:48:34.050975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.563 [2024-05-16 09:48:34.051204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.563 [2024-05-16 09:48:34.051213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.563 [2024-05-16 09:48:34.051220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.563 [2024-05-16 09:48:34.054768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.563 [2024-05-16 09:48:34.063551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.563 [2024-05-16 09:48:34.064068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.064456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.064468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.563 [2024-05-16 09:48:34.064478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.563 [2024-05-16 09:48:34.064716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.563 [2024-05-16 09:48:34.064938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.563 [2024-05-16 09:48:34.064947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.563 [2024-05-16 09:48:34.064954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.563 [2024-05-16 09:48:34.068507] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.563 [2024-05-16 09:48:34.077497] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.563 [2024-05-16 09:48:34.078096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.078425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.078435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.563 [2024-05-16 09:48:34.078448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.563 [2024-05-16 09:48:34.078672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.563 [2024-05-16 09:48:34.078891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.563 [2024-05-16 09:48:34.078899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.563 [2024-05-16 09:48:34.078906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.563 [2024-05-16 09:48:34.082455] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.563 [2024-05-16 09:48:34.091449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.563 [2024-05-16 09:48:34.091889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.092209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.092220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.563 [2024-05-16 09:48:34.092227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.563 [2024-05-16 09:48:34.092446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.563 [2024-05-16 09:48:34.092665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.563 [2024-05-16 09:48:34.092672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.563 [2024-05-16 09:48:34.092679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.563 [2024-05-16 09:48:34.096225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.563 [2024-05-16 09:48:34.105425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.563 [2024-05-16 09:48:34.105958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.106266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.563 [2024-05-16 09:48:34.106276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.563 [2024-05-16 09:48:34.106283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.563 [2024-05-16 09:48:34.106502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.563 [2024-05-16 09:48:34.106720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.563 [2024-05-16 09:48:34.106728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.563 [2024-05-16 09:48:34.106735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.563 [2024-05-16 09:48:34.110283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.563 [2024-05-16 09:48:34.119304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.826 [2024-05-16 09:48:34.119984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.120362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.120376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.826 [2024-05-16 09:48:34.120389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.826 [2024-05-16 09:48:34.120628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.826 [2024-05-16 09:48:34.120850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.826 [2024-05-16 09:48:34.120858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.826 [2024-05-16 09:48:34.120866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.826 [2024-05-16 09:48:34.124419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.826 [2024-05-16 09:48:34.133215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.826 [2024-05-16 09:48:34.133786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.134022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.134031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.826 [2024-05-16 09:48:34.134039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.826 [2024-05-16 09:48:34.134263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.826 [2024-05-16 09:48:34.134482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.826 [2024-05-16 09:48:34.134490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.826 [2024-05-16 09:48:34.134497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.826 [2024-05-16 09:48:34.138042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.826 [2024-05-16 09:48:34.147032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.826 [2024-05-16 09:48:34.147611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.147931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.147940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.826 [2024-05-16 09:48:34.147948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.826 [2024-05-16 09:48:34.148171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.826 [2024-05-16 09:48:34.148390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.826 [2024-05-16 09:48:34.148397] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.826 [2024-05-16 09:48:34.148404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.826 [2024-05-16 09:48:34.151943] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.826 [2024-05-16 09:48:34.160928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.826 [2024-05-16 09:48:34.161474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.161776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.826 [2024-05-16 09:48:34.161786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.826 [2024-05-16 09:48:34.161794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.162016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.827 [2024-05-16 09:48:34.162239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.827 [2024-05-16 09:48:34.162247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.827 [2024-05-16 09:48:34.162254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.827 [2024-05-16 09:48:34.165794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.827 [2024-05-16 09:48:34.174863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.827 [2024-05-16 09:48:34.175515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.175858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.175871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.827 [2024-05-16 09:48:34.175881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.176128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.827 [2024-05-16 09:48:34.176350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.827 [2024-05-16 09:48:34.176360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.827 [2024-05-16 09:48:34.176367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.827 [2024-05-16 09:48:34.179912] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.827 [2024-05-16 09:48:34.188707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.827 [2024-05-16 09:48:34.189261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.189460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.189469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.827 [2024-05-16 09:48:34.189477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.189696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.827 [2024-05-16 09:48:34.189916] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.827 [2024-05-16 09:48:34.189923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.827 [2024-05-16 09:48:34.189930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.827 [2024-05-16 09:48:34.193475] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.827 [2024-05-16 09:48:34.202668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.827 [2024-05-16 09:48:34.203236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.203563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.203572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.827 [2024-05-16 09:48:34.203579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.203798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.827 [2024-05-16 09:48:34.204020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.827 [2024-05-16 09:48:34.204029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.827 [2024-05-16 09:48:34.204036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.827 [2024-05-16 09:48:34.207584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.827 [2024-05-16 09:48:34.216570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.827 [2024-05-16 09:48:34.217180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.217567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.217581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.827 [2024-05-16 09:48:34.217590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.217829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.827 [2024-05-16 09:48:34.218059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.827 [2024-05-16 09:48:34.218069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.827 [2024-05-16 09:48:34.218078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.827 [2024-05-16 09:48:34.221628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.827 [2024-05-16 09:48:34.230419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.827 [2024-05-16 09:48:34.231075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.231279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.231294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.827 [2024-05-16 09:48:34.231303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.231542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.827 [2024-05-16 09:48:34.231764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.827 [2024-05-16 09:48:34.231772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.827 [2024-05-16 09:48:34.231780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.827 [2024-05-16 09:48:34.235343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.827 [2024-05-16 09:48:34.244342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.827 [2024-05-16 09:48:34.244944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.245275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.827 [2024-05-16 09:48:34.245286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.827 [2024-05-16 09:48:34.245293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.827 [2024-05-16 09:48:34.245513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.245731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.245752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.828 [2024-05-16 09:48:34.245759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.828 [2024-05-16 09:48:34.249308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.828 [2024-05-16 09:48:34.258297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.828 [2024-05-16 09:48:34.258824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.259169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.259179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.828 [2024-05-16 09:48:34.259187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.828 [2024-05-16 09:48:34.259405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.259623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.259640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.828 [2024-05-16 09:48:34.259646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.828 [2024-05-16 09:48:34.263189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.828 [2024-05-16 09:48:34.272182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.828 [2024-05-16 09:48:34.272807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.273083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.273097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.828 [2024-05-16 09:48:34.273107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.828 [2024-05-16 09:48:34.273346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.273568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.273576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.828 [2024-05-16 09:48:34.273584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.828 [2024-05-16 09:48:34.277137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.828 [2024-05-16 09:48:34.286139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.828 [2024-05-16 09:48:34.286681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.287001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.287010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.828 [2024-05-16 09:48:34.287018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.828 [2024-05-16 09:48:34.287242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.287462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.287469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.828 [2024-05-16 09:48:34.287480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.828 [2024-05-16 09:48:34.291026] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.828 [2024-05-16 09:48:34.300011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.828 [2024-05-16 09:48:34.300429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.300762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.300771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.828 [2024-05-16 09:48:34.300779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.828 [2024-05-16 09:48:34.300997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.301221] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.301229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.828 [2024-05-16 09:48:34.301237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.828 [2024-05-16 09:48:34.304777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.828 [2024-05-16 09:48:34.313971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.828 [2024-05-16 09:48:34.314642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.314974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.314986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.828 [2024-05-16 09:48:34.314996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.828 [2024-05-16 09:48:34.315243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.315466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.315476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.828 [2024-05-16 09:48:34.315484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.828 [2024-05-16 09:48:34.319062] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.828 [2024-05-16 09:48:34.327853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.828 [2024-05-16 09:48:34.328407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.328741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.828 [2024-05-16 09:48:34.328754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.828 [2024-05-16 09:48:34.328763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.828 [2024-05-16 09:48:34.329002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.828 [2024-05-16 09:48:34.329234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.828 [2024-05-16 09:48:34.329243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.829 [2024-05-16 09:48:34.329251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.829 [2024-05-16 09:48:34.332811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.829 [2024-05-16 09:48:34.341828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.829 [2024-05-16 09:48:34.342387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.342721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.342735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.829 [2024-05-16 09:48:34.342745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.829 [2024-05-16 09:48:34.342983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.829 [2024-05-16 09:48:34.343215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.829 [2024-05-16 09:48:34.343226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.829 [2024-05-16 09:48:34.343233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.829 [2024-05-16 09:48:34.346780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.829 [2024-05-16 09:48:34.355776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.829 [2024-05-16 09:48:34.356415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.356763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.356777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.829 [2024-05-16 09:48:34.356786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.829 [2024-05-16 09:48:34.357025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.829 [2024-05-16 09:48:34.357254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.829 [2024-05-16 09:48:34.357263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.829 [2024-05-16 09:48:34.357271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.829 [2024-05-16 09:48:34.360822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:40.829 [2024-05-16 09:48:34.369611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.829 [2024-05-16 09:48:34.370157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.370386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.370399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.829 [2024-05-16 09:48:34.370408] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:40.829 [2024-05-16 09:48:34.370647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:40.829 [2024-05-16 09:48:34.370869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:40.829 [2024-05-16 09:48:34.370877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:40.829 [2024-05-16 09:48:34.370884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:40.829 [2024-05-16 09:48:34.374443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:40.829 [2024-05-16 09:48:34.383436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.829 [2024-05-16 09:48:34.383976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.384258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:40.829 [2024-05-16 09:48:34.384269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:40.829 [2024-05-16 09:48:34.384278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.384499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.384720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.384728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.384736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.388290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.092 [2024-05-16 09:48:34.397282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.397932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.398064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.398077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.398087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.398325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.398548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.398556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.398565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.402114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.092 [2024-05-16 09:48:34.411102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.411829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.412226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.412241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.412250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.412489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.412712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.412721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.412728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.416278] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
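The block above is one and the same retry loop repeating: each pass disconnects the controller, retries connect() to 10.0.0.2 port 4420, fails with errno = 111 because the target's TCP listener has not been configured yet (it comes up later in the trace), and reports "Resetting controller failed." before the next attempt. Errno 111 on Linux is ECONNREFUSED; a quick way to confirm the mapping from a shell — a sketch only, assuming a typical Linux host with kernel headers and python3 installed, neither of which is part of this log:
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h        # expected: "#define ECONNREFUSED 111 /* Connection refused */"
  python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'   # ECONNREFUSED = Connection refused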
00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 [2024-05-16 09:48:34.425060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.425578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.425966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.425979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.425988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.426234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.426457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.426465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.426473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.430016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.092 [2024-05-16 09:48:34.439035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.439719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.440058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.440072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.440082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.440320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.440541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.440550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.440558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.444110] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.092 [2024-05-16 09:48:34.453110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.453662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.453970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.453979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.453987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.454210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.454431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.454438] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.454445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.457987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 [2024-05-16 09:48:34.466981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.467518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.467852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.467861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.467869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.468092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.468312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.092 [2024-05-16 09:48:34.468319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.092 [2024-05-16 09:48:34.468326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.092 [2024-05-16 09:48:34.468440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.092 [2024-05-16 09:48:34.471865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.092 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.092 [2024-05-16 09:48:34.480846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.092 [2024-05-16 09:48:34.481465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.481819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.092 [2024-05-16 09:48:34.481832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.092 [2024-05-16 09:48:34.481841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.092 [2024-05-16 09:48:34.482087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.092 [2024-05-16 09:48:34.482309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.093 [2024-05-16 09:48:34.482317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.093 [2024-05-16 09:48:34.482325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.093 [2024-05-16 09:48:34.485874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.093 [2024-05-16 09:48:34.494725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.093 [2024-05-16 09:48:34.495177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.093 [2024-05-16 09:48:34.495407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.093 [2024-05-16 09:48:34.495417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.093 [2024-05-16 09:48:34.495424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.093 [2024-05-16 09:48:34.495649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.093 [2024-05-16 09:48:34.495868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.093 [2024-05-16 09:48:34.495876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.093 [2024-05-16 09:48:34.495883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.093 [2024-05-16 09:48:34.499429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.093 Malloc0 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.093 [2024-05-16 09:48:34.508621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.093 [2024-05-16 09:48:34.509188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.093 [2024-05-16 09:48:34.509573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.093 [2024-05-16 09:48:34.509587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.093 [2024-05-16 09:48:34.509596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.093 [2024-05-16 09:48:34.509835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.093 [2024-05-16 09:48:34.510066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.093 [2024-05-16 09:48:34.510076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.093 [2024-05-16 09:48:34.510083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.093 [2024-05-16 09:48:34.513631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:41.093 [2024-05-16 09:48:34.522409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.093 [2024-05-16 09:48:34.522950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.093 [2024-05-16 09:48:34.523291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:41.093 [2024-05-16 09:48:34.523302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213e7d0 with addr=10.0.0.2, port=4420 00:35:41.093 [2024-05-16 09:48:34.523309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213e7d0 is same with the state(5) to be set 00:35:41.093 [2024-05-16 09:48:34.523529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e7d0 (9): Bad file descriptor 00:35:41.093 [2024-05-16 09:48:34.523747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:41.093 [2024-05-16 09:48:34.523755] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:41.093 [2024-05-16 09:48:34.523762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.093 [2024-05-16 09:48:34.527337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:41.093 [2024-05-16 09:48:34.531719] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:41.093 [2024-05-16 09:48:34.531906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.093 [2024-05-16 09:48:34.536346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.093 09:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 531526 00:35:41.093 [2024-05-16 09:48:34.571489] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
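Interleaved with the reconnect attempts, the shell trace above brings the target side up step by step: a TCP transport, a Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as a namespace, and finally a TCP listener on 10.0.0.2 port 4420, at which point the pending reset completes ("Resetting controller successful."). The rpc_cmd helper used by the test framework is, roughly, a wrapper around SPDK's scripts/rpc.py, so the same sequence can be driven by hand. A minimal sketch, assuming a running nvmf_tgt reachable over the default RPC socket and reusing exactly the arguments recorded in the trace (the ./scripts/rpc.py invocation itself is the only part not shown in the log):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                        # size/block-size args as passed by the test script
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Once the last command registers the listener, the host-side reconnect loop seen earlier in the log stops failing, which is what the bdevperf results below rely on.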
00:35:51.096 00:35:51.096 Latency(us) 00:35:51.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.096 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:51.096 Verification LBA range: start 0x0 length 0x4000 00:35:51.096 Nvme1n1 : 15.01 8522.82 33.29 9475.67 0.00 7086.11 552.96 22063.79 00:35:51.096 =================================================================================================================== 00:35:51.096 Total : 8522.82 33.29 9475.67 0.00 7086.11 552.96 22063.79 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:51.096 rmmod nvme_tcp 00:35:51.096 rmmod nvme_fabrics 00:35:51.096 rmmod nvme_keyring 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 532543 ']' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 532543 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 532543 ']' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 532543 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 532543 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 532543' 00:35:51.096 killing process with pid 532543 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 532543 00:35:51.096 [2024-05-16 09:48:43.366885] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 532543 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:51.096 09:48:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.038 09:48:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:52.038 00:35:52.038 real 0m27.854s 00:35:52.038 user 1m3.401s 00:35:52.038 sys 0m7.065s 00:35:52.038 09:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:52.038 09:48:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:52.038 ************************************ 00:35:52.038 END TEST nvmf_bdevperf 00:35:52.038 ************************************ 00:35:52.300 09:48:45 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:52.300 09:48:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:52.300 09:48:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:52.300 09:48:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.300 ************************************ 00:35:52.300 START TEST nvmf_target_disconnect 00:35:52.300 ************************************ 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:52.300 * Looking for test storage... 
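Before the target_disconnect suite's output continues: the nvmftestfini sequence just traced amounts to roughly the following, with values specific to this run (pid 532543, interfaces cvl_0_0/cvl_0_1). The netns removal line is an assumption about what _remove_spdk_ns does here; the other commands appear verbatim in the trace:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
    modprobe -v -r nvme-fabrics
    kill 532543                      # the nvmf_tgt started for the bdevperf test
    ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns in this setup
    ip -4 addr flush cvl_0_1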
00:35:52.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:52.300 09:48:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:58.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:58.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.894 09:48:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:58.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:58.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:58.894 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.154 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:59.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:35:59.415 00:35:59.415 --- 10.0.0.2 ping statistics --- 00:35:59.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.415 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:59.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:35:59.415 00:35:59.415 --- 10.0.0.1 ping statistics --- 00:35:59.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.415 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.415 ************************************ 00:35:59.415 START TEST nvmf_target_disconnect_tc1 00:35:59.415 ************************************ 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:35:59.415 
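The environment bring-up traced above (matching the two E810 ports 0000:4b:00.0 and 0000:4b:00.1 against the supported PCI IDs, then moving cvl_0_0 into a private network namespace so target and initiator can share one host) reduces to the commands below; all of them appear in the trace, only the inline comments are added:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # connectivity checks before the tests start
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1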
09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:59.415 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.415 [2024-05-16 09:48:52.906111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.415 [2024-05-16 09:48:52.906493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:59.415 [2024-05-16 09:48:52.906507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245d1d0 with addr=10.0.0.2, port=4420 00:35:59.415 [2024-05-16 09:48:52.906532] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:59.415 [2024-05-16 09:48:52.906545] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:59.415 [2024-05-16 09:48:52.906552] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:59.415 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:59.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:59.415 Initializing NVMe Controllers 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:59.415 00:35:59.415 real 0m0.106s 00:35:59.415 user 0m0.043s 00:35:59.415 sys 0m0.062s 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:59.415 ************************************ 00:35:59.415 END TEST nvmf_target_disconnect_tc1 00:35:59.415 ************************************ 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:59.415 09:48:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:59.677 ************************************ 00:35:59.677 START TEST nvmf_target_disconnect_tc2 00:35:59.677 ************************************ 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=538578 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 538578 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 538578 ']' 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
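Target startup for tc2, as traced: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and the harness blocks until the app answers on /var/tmp/spdk.sock. A simplified stand-in for that bring-up, assuming the working directory is the SPDK repo root (waitforlisten also verifies the pid stays alive; polling rpc_get_methods here is just one way to detect readiness):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # poll the default RPC socket until the target responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done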
00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:59.677 09:48:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:59.677 [2024-05-16 09:48:53.053427] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:35:59.677 [2024-05-16 09:48:53.053505] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.677 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.677 [2024-05-16 09:48:53.142909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:59.939 [2024-05-16 09:48:53.237875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.939 [2024-05-16 09:48:53.237933] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:59.939 [2024-05-16 09:48:53.237942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.939 [2024-05-16 09:48:53.237949] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.939 [2024-05-16 09:48:53.237955] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.939 [2024-05-16 09:48:53.238128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:35:59.939 [2024-05-16 09:48:53.238288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:35:59.939 [2024-05-16 09:48:53.238450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:35:59.939 [2024-05-16 09:48:53.238452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 Malloc0 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 [2024-05-16 09:48:53.921466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 [2024-05-16 09:48:53.961566] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:00.512 [2024-05-16 09:48:53.961891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=538924 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:00.512 09:48:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:00.512 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.095 09:48:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 538578 00:36:03.095 09:48:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Read completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.095 Write completed with error (sct=0, sc=8) 00:36:03.095 starting I/O failed 00:36:03.096 Read completed with error (sct=0, sc=8) 00:36:03.096 starting I/O failed 00:36:03.096 [2024-05-16 09:48:55.994692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:03.096 [2024-05-16 09:48:55.995065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
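This is the core of tc2, restated without the trace noise: start the reconnect example against the listener, give it a couple of seconds of I/O, then kill -9 the target underneath it. The 32 queued I/Os complete with error (sct=0, sc=8) as logged above, the CQ reports transport error -6, and every subsequent reset attempt fails for as long as nothing is listening:

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"    # 538578 in this run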
00:36:03.096 [2024-05-16 09:48:55.995445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.995482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:55.995777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.996299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.996341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:55.996686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.996888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.996898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:55.997353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.997700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.997714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:55.998062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.998445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.998482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:55.998814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.999298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.999335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:55.999665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.999976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:55.999986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.000374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.000643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.000652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 
00:36:03.096 [2024-05-16 09:48:56.000949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.001321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.001331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.001663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.001999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.002008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.002312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.002638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.002647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.002837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.003075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.003085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.003316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.003510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.003519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.003726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.004035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.004044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.004373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.004667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.004676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 
00:36:03.096 [2024-05-16 09:48:56.004963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.005325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.005335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.005624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.005930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.005939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.006257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.006593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.006603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.006880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.007084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.007093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.007271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.007585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.007595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.007919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.008150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.008160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.008528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.008864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.008873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 
00:36:03.096 [2024-05-16 09:48:56.009214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.009532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.009541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.009830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.010122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.010131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.010438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.010742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.010752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.096 [2024-05-16 09:48:56.010942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.011250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.096 [2024-05-16 09:48:56.011260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.096 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.011571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.011832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.011841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.012000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.012174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.012184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.012519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.012807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.012819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 
00:36:03.097 [2024-05-16 09:48:56.013159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.013457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.013467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.013801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.014096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.014106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.014484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.014772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.014781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.015065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.015275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.015286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.015576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.015908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.015917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.016323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.016641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.016651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.016823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.017127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.017138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 
00:36:03.097 [2024-05-16 09:48:56.017468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.017772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.017782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.018080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.018397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.018407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.018742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.019060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.019070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.019384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.019682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.019692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.019906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.020210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.020220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.020560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.020740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.020750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.021033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.021409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.021419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 
00:36:03.097 [2024-05-16 09:48:56.021717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.022078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.022089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.022387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.022683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.022694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.023031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.023235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.023246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.023541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.023844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.023853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.024125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.024480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.024489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.024805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.025107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.025116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.025507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.025806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.025816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 
00:36:03.097 [2024-05-16 09:48:56.026088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.026414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.026423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.026712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.027007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.027016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.027354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.027689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.027701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.028032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.028330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.028340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.097 qpair failed and we were unable to recover it. 00:36:03.097 [2024-05-16 09:48:56.028703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.028989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.097 [2024-05-16 09:48:56.028998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.029258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.029568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.029577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.029878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.030194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.030203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 
00:36:03.098 [2024-05-16 09:48:56.030495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.030783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.030793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.030969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.031259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.031268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.031614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.031914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.031923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.032107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.032477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.032486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.032804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.033129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.033138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.033435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.033786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.033795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.033986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.034285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.034295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 
00:36:03.098 [2024-05-16 09:48:56.034590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.034909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.034919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.035230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.035539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.035548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.035884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.036103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.036112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.036395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.036706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.036715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.037004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.037213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.037222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.037570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.037754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.037764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.038070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.038271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.038280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 
00:36:03.098 [2024-05-16 09:48:56.038561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.038944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.038953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.039292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.039595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.039604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.039909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.040186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.040195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.040566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.040881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.040890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.041202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.041518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.041528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.041829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.042142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.042151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.042492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.042802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.042811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 
00:36:03.098 [2024-05-16 09:48:56.042984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.043286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.043296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.043593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.043783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.043792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.044133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.044457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.044467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.044653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.044972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.044982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.045278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.045592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.045601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.045902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.046067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.046077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.098 qpair failed and we were unable to recover it. 00:36:03.098 [2024-05-16 09:48:56.046421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.098 [2024-05-16 09:48:56.046745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.046755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 
00:36:03.099 [2024-05-16 09:48:56.047078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.047354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.047363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.047654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.047983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.047992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.048291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.048586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.048596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.048792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.049092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.049102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.049409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.049727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.049735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.050010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.050295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.050305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.050592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.050907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.050916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 
00:36:03.099 [2024-05-16 09:48:56.051188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.051505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.051514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.051800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.052075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.052085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.052458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.052734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.052743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.052951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.053246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.053255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.053538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.053849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.053858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.054182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.054533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.054542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.054831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.055142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.055151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 
00:36:03.099 [2024-05-16 09:48:56.055456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.055749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.055758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.056101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.056414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.056423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.056737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.056874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.056883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.057176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.057508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.057517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.057718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.057918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.057928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.058223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.058546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.058555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.058860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.059021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.059031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 
00:36:03.099 [2024-05-16 09:48:56.059369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.059658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.059667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.059991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.060251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.060261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.060531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.060849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.060858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.061178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.061495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.061504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.061706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.062007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.062016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.062321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.062636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.062645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.099 qpair failed and we were unable to recover it. 00:36:03.099 [2024-05-16 09:48:56.063028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.099 [2024-05-16 09:48:56.063314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.063323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 
00:36:03.100 [2024-05-16 09:48:56.063619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.063942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.063953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.064238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.064558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.064567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.064869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.065176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.065186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.065511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.065836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.065845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.066146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.066408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.066417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.066721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.067025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.067034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.067398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.067686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.067695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 
00:36:03.100 [2024-05-16 09:48:56.068005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.068327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.068337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.068639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.068958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.068967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.069275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.069567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.069576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.069927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.070248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.070258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.070594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.070877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.070886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.071076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.071379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.071388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.071776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.072069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.072078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 
00:36:03.100 [2024-05-16 09:48:56.072365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.072723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.072732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.073023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.073351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.073361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.073666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.073990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.073999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.074392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.074714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.074729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.075059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.075340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.075350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.075558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.075737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.075746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.076013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.076283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.076293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 
00:36:03.100 [2024-05-16 09:48:56.076616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.076937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.076946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.077370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.077570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.100 [2024-05-16 09:48:56.077579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.100 qpair failed and we were unable to recover it. 00:36:03.100 [2024-05-16 09:48:56.077883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.078198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.078208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.078492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.078804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.078814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.079115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.079431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.079440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.079751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.080044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.080055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.080460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.080784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.080793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-05-16 09:48:56.081065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.081432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.081441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.081742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.082069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.082080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.082271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.082576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.082585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.082865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.083060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.083070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.083275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.083506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.083515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.083813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.084147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.084157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.084443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.084719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.084728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-05-16 09:48:56.085031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.085203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.085214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.085504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.085826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.085835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.086006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.086275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.086285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.086595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.086904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.086914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.087215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.087527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.087536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.087848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.088159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.088168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.088464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.088764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.088774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-05-16 09:48:56.089056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.089374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.089383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.089578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.089907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.089916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.090222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.090558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.090567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.090848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.091166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.091175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.091478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.091770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.091779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.092063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.092363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.092372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.092656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.092984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.092994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 
00:36:03.101 [2024-05-16 09:48:56.093202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.093523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.093532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.093828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.094113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.094123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.101 [2024-05-16 09:48:56.094430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.094719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.101 [2024-05-16 09:48:56.094730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.101 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.095031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.095343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.095352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.095656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.095941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.095950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.096229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.096423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.096432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.096721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.097029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.097037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-05-16 09:48:56.097344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.097621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.097629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.097944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.098254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.098265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.098576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.098844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.098853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.099160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.099368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.099377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.099674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.099988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.099997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.100360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.100694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.100703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.101006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.101301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.101311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-05-16 09:48:56.101612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.101930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.101939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.102121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.102498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.102508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.102789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.103070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.103079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.103273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.103608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.103617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.103929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.104222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.104232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.104538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.104819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.104828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.105102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.105319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.105328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-05-16 09:48:56.105503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.105814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.105823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.106043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.106268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.106278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.106574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.106873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.106882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.107184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.107486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.107495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.107894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.108211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.108227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.108510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.108823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.108832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.109135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.109418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.109428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 
00:36:03.102 [2024-05-16 09:48:56.109803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.110092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.110101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.110475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.110786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.110795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.111075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.111393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.102 [2024-05-16 09:48:56.111401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.102 qpair failed and we were unable to recover it. 00:36:03.102 [2024-05-16 09:48:56.111713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.111998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.112006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.112206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.112563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.112571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.112877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.113183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.113192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.113400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.113793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.113802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-05-16 09:48:56.114099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.114403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.114412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.114718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.114999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.115008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.115324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.115656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.115665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.115947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.116153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.116162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.116439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.116723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.116731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.117008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.117329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.117338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.117623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.117934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.117943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-05-16 09:48:56.118143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.118502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.118511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.118820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.119184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.119194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.119498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.119830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.119839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.120140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.120436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.120445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.120755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.121068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.121078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.121393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.121711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.121720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.122038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.122334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.122343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-05-16 09:48:56.122506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.122773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.122783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.123102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.123407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.123415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.123723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.123972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.123981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.124292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.124621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.124630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.124922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.125222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.125234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.125540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.125828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.125837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.126097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.126403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.126412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 
00:36:03.103 [2024-05-16 09:48:56.126765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.126958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.126968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.127330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.127648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.127658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.127964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.128246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.128256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.128551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.128710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.128720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.103 qpair failed and we were unable to recover it. 00:36:03.103 [2024-05-16 09:48:56.128982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.103 [2024-05-16 09:48:56.129292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.129301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.129606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.129800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.129809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.130104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.130404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.130413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-05-16 09:48:56.130720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.131041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.131050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.131340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.131657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.131667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.131952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.132265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.132274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.132583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.132897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.132906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.133072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.133413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.133422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.133723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.134011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.134020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.134390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.134697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.134706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-05-16 09:48:56.135011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.135205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.135215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.135524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.135870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.135879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.136170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.136482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.136491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.136770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.137050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.137062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.137383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.137677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.137686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.137985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.138188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.138198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.138498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.138790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.138799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-05-16 09:48:56.139083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.139403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.139412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.139689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.140013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.140022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.140221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.140563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.140572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.140866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.141164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.141173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.141479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.141798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.141807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.142126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.142458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.142467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.142748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.143055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.143065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 
00:36:03.104 [2024-05-16 09:48:56.143364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.143657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.143666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.143959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.144280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.144290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.104 qpair failed and we were unable to recover it. 00:36:03.104 [2024-05-16 09:48:56.144562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.144810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.104 [2024-05-16 09:48:56.144819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.145100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.145402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.145411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.145670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.145890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.145899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.146218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.146431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.146440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.146747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.147062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.147071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [2024-05-16 09:48:56.147372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.147666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.147674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.147956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.148266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.148275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.148576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.148663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.148673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.148842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.149147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.149156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.149445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.149729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.149739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.149912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.150116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.150126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.150536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.150801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.150810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [2024-05-16 09:48:56.151120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.151412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.151421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.151731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.152019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.152028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.152213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.152576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.152585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.152865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.153170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.153179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.153472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.153785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.153794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.154097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.154387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.154396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 00:36:03.105 [2024-05-16 09:48:56.154684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.155021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.105 [2024-05-16 09:48:56.155032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [2024-05-16 09:48:56.155308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.105 [2024-05-16 09:48:56.155632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.105 [2024-05-16 09:48:56.155641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 
00:36:03.105 qpair failed and we were unable to recover it. 
00:36:03.105 [... the same three-message sequence (two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every retry from 09:48:56.155 through 09:48:56.246 ...] 
00:36:03.111 [2024-05-16 09:48:56.246304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.111 [2024-05-16 09:48:56.246597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.111 [2024-05-16 09:48:56.246607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 
00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-05-16 09:48:56.246913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.247113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.247122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.247448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.247705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.247714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.248046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.248335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.248344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.248660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.248945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.248954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.249237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.249518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.249527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.249736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.250066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.250076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.250471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.250765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.250774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-05-16 09:48:56.251092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.251424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.251433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.251605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.251830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.251841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.252140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.252313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.252322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.252652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.252967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.252976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.253379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.253670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.253679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.254004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.254310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.254320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.254615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.254945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.254954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-05-16 09:48:56.255272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.255590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.255599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.255892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.256232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.256241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.256536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.256733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.256742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.257040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.257223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.257234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.257534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.257853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.257862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.258165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.258461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.258470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.258755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.259050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.259062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 
00:36:03.111 [2024-05-16 09:48:56.259369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.259559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.259568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.259871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.260066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.260076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.260452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.260769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.260778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.261077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.261407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.261416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.111 [2024-05-16 09:48:56.261729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.262030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.111 [2024-05-16 09:48:56.262038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.111 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.262377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.262709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.262717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.262928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.263236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.263245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-05-16 09:48:56.263531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.263854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.263863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.264174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.264394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.264402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.264570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.264763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.264772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.264977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.265265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.265274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.265365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.265648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.265658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.265976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.266051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.266068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.266375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.266686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.266695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-05-16 09:48:56.267084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.267250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.267261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.267546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.267743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.267752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.267942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.268241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.268251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.268560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.268777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.268787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.269116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.269446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.269462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.269769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.270100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.270110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.270444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.270748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.270757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-05-16 09:48:56.270927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.271202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.271212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.271515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.271826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.271836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.272013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.272352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.272362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.272667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.272988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.272998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.273309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.273528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.273537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.273834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.274122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.274132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.274350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.274619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.274629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 
00:36:03.112 [2024-05-16 09:48:56.274901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.275216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.275226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.275545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.275859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.275868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.276064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.276267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.276276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.276599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.276896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.112 [2024-05-16 09:48:56.276905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.112 qpair failed and we were unable to recover it. 00:36:03.112 [2024-05-16 09:48:56.277100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.277408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.277419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.277749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.277951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.277960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.278263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.278588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.278598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 
00:36:03.113 [2024-05-16 09:48:56.278802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.279100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.279110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.279415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.279634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.279643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.279948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.280017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.280026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.280302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.280599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.280613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.280811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.280970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.280980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.281283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.281338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.281348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.281682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.281975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.281985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 
00:36:03.113 [2024-05-16 09:48:56.282281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.282591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.282600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.282923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.283326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.283335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.283525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.283797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.283805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.284008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.284194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.284203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.284477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.284856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.284865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.285169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.285512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.285521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.285826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.286140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.286152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 
00:36:03.113 [2024-05-16 09:48:56.286459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.286773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.286783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.287108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.287403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.287413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.287738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.288021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.288030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.288340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.288548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.288557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.288853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.289179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.289189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.289489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.289776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.289785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.290094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.290424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.290433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 
00:36:03.113 [2024-05-16 09:48:56.290608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.290934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.290943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.291233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.291397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.291407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.291781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.291997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.292007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.292331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.292616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.292626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.292786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.293071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.293081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.113 [2024-05-16 09:48:56.293143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.293429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.113 [2024-05-16 09:48:56.293438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.113 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.293743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.294021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.294029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 
00:36:03.114 [2024-05-16 09:48:56.294310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.294488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.294498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.294804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.295112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.295122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.295478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.295799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.295808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.296125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.296430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.296439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.296749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.297082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.297092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.297360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.297676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.297685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.298072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.298337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.298346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 
00:36:03.114 [2024-05-16 09:48:56.298534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.298721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.298730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.299061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.299366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.299375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.299703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.299997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.300006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.300364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.300692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.300701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.301009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.301300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.301309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.301617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.301940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.301949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 00:36:03.114 [2024-05-16 09:48:56.302230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.302598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.114 [2024-05-16 09:48:56.302607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.114 qpair failed and we were unable to recover it. 
00:36:03.114 [2024-05-16 09:48:56.302902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.114 [2024-05-16 09:48:56.303232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.114 [2024-05-16 09:48:56.303241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 
00:36:03.114 qpair failed and we were unable to recover it. 
[The same failure pattern (two posix_sock_create connect() failures with errno = 111, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x1c54270 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats without variation from 09:48:56.303561 through 09:48:56.394893, elapsed time 00:36:03.114 to 00:36:03.119.] 
00:36:03.119 [2024-05-16 09:48:56.395216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.119 [2024-05-16 09:48:56.395512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:03.119 [2024-05-16 09:48:56.395521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 
00:36:03.119 qpair failed and we were unable to recover it. 
00:36:03.119 [2024-05-16 09:48:56.395837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-05-16 09:48:56.396170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.119 [2024-05-16 09:48:56.396179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.119 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.396482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.396816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.396825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.397110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.397424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.397434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.397625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.397826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.397835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.398157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.398365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.398374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.398715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.398998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.399007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.399304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.399521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.399530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.120 [2024-05-16 09:48:56.399828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.400075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.400084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.400463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.400725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.400734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.401038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.401355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.401364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.401648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.401855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.401864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.402166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.402498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.402507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.402707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.402897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.402906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.403110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.403479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.403488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.120 [2024-05-16 09:48:56.403772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.404082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.404091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.404388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.404733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.404742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.405020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.405338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.405347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.405651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.405984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.405994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.406288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.406580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.406589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.406837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.407134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.407143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.407480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.407763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.407772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.120 [2024-05-16 09:48:56.408076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.408390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.408401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.408680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.408958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.408967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.409259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.409574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.409583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.409886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.410174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.410183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.410359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.410630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.410639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.410940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.411201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.411211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.411506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.411816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.411825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 
00:36:03.120 [2024-05-16 09:48:56.412134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.412436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.412445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.120 [2024-05-16 09:48:56.412755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.413068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.120 [2024-05-16 09:48:56.413077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.120 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.413390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.413709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.413718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.414024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.414367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.414379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.414572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.414914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.414923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.415334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.415621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.415630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.415908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.416167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.416176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 
00:36:03.121 [2024-05-16 09:48:56.416480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.416780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.416788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.417092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.417410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.417419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.417715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.418009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.418017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.418393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.418693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.418702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.419008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.419324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.419334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.419641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.419954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.419963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.420161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.420451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.420460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 
00:36:03.121 [2024-05-16 09:48:56.420790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.421082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.421091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.421465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.421802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.421811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.422095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.422442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.422451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.422753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.423064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.423074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.423406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.423706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.423716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.424015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.424216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.424225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.424529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.424852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.424861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 
00:36:03.121 [2024-05-16 09:48:56.425157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.425488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.425498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.425781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.426063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.426072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.426382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.426597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.426606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.426897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.427192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.427202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.427388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.427703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.427712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.427917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.428216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.428226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.428392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.428724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.428733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 
00:36:03.121 [2024-05-16 09:48:56.429016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.429193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.429202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.429534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.429842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.429851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.430127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.430333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.430343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.121 qpair failed and we were unable to recover it. 00:36:03.121 [2024-05-16 09:48:56.430667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.430975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.121 [2024-05-16 09:48:56.430984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.431316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.431593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.431603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.431897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.432207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.432216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.432498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.432753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.432762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 
00:36:03.122 [2024-05-16 09:48:56.433068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.433385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.433394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.433765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.434069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.434078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.434289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.434480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.434489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.434870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.435201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.435211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.435480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.435805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.435815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.436125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.436467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.436476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.436681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.436979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.436987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 
00:36:03.122 [2024-05-16 09:48:56.437298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.437603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.437612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.437993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.438288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.438298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.438601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.438796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.438805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.439088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.439395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.439419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.439773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.440105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.440120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.440425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.440783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.440792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.441007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.441327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.441337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 
00:36:03.122 [2024-05-16 09:48:56.441627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.441948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.441957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.442314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.442596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.442605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.442924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.443150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.443160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.443440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.443723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.443733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.444046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.444391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.444400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.444582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.444773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.444787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.444991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.445276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.445286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 
00:36:03.122 [2024-05-16 09:48:56.445600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.445802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.445811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.446003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.446293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.446303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.446583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.446933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.446942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.447229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.447550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.447559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.447867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.448172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.448182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.122 qpair failed and we were unable to recover it. 00:36:03.122 [2024-05-16 09:48:56.448469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.122 [2024-05-16 09:48:56.448761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.448770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.449058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.449362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.449371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 
00:36:03.123 [2024-05-16 09:48:56.449679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.450008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.450017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.450223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.450569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.450578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.450886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.451195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.451205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.451533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.451844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.451853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.452032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.452328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.452338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.452676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.452992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.453004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 00:36:03.123 [2024-05-16 09:48:56.453301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.453614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.123 [2024-05-16 09:48:56.453623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.123 qpair failed and we were unable to recover it. 
00:36:03.123 [2024-05-16 09:48:56.453914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.123 [2024-05-16 09:48:56.454223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.123 [2024-05-16 09:48:56.454234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420
00:36:03.123 qpair failed and we were unable to recover it.
00:36:03.128 [the same error sequence repeats for every reconnect attempt from 09:48:56.454557 through 09:48:56.543753: each connect() to 10.0.0.2, port=4420 fails with errno = 111 and the qpair for tqpair=0x1c54270 is not recovered]
00:36:03.128 [2024-05-16 09:48:56.543956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.544252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.544262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.544573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.544885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.544894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.545183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.545367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.545377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.545671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.545998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.546007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.546317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.546494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.546503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.546773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.547094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.547104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.547388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.547716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.547726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 
00:36:03.128 [2024-05-16 09:48:56.547939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.548242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.548251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.548558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.548895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.548904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.549212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.549412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.549421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.549677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.549826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.549835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.550032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.550294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.550303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.550619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.550931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.550940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.551212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.551508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.551517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 
00:36:03.128 [2024-05-16 09:48:56.551813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.552136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.552146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.552474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.552765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.552774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.552949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.553212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.553222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.128 qpair failed and we were unable to recover it. 00:36:03.128 [2024-05-16 09:48:56.553523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.128 [2024-05-16 09:48:56.553749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.553758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.554064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.554382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.554391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.554715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.555029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.555039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.555362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.555567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.555576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-05-16 09:48:56.555887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.556198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.556208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.556495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.556830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.556839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.557116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.557442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.557451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.557617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.557815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.557824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.558118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.558416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.558425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.558752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.559061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.559071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.559291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.559467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.559475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-05-16 09:48:56.559784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.560101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.560111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.560417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.560729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.560738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.561028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.561332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.561341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.561650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.561943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.561951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.562142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.562506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.562515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.562699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.562982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.562991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.563298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.563607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.563617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-05-16 09:48:56.563909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.564221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.564230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.564543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.564856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.564865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.565182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.565392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.565400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.565610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.565828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.565838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.566026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.566300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.566309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.566595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.566856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.566865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.567163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.567496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.567504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-05-16 09:48:56.567821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.568149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.568158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.568448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.568736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.568745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.569058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.569354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.569365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.569666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.569944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.569954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.570299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.570616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.570625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.570942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.571255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.571264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.129 [2024-05-16 09:48:56.571555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.571833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.571841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 
00:36:03.129 [2024-05-16 09:48:56.572148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.572456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.129 [2024-05-16 09:48:56.572465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.129 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.572752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.573065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.573075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.573467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.573720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.573729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.573940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.574239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.574250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.574530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.574842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.574851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.575168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.575462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.575471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.575776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.576091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.576101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-05-16 09:48:56.576393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.576716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.576725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.576998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.577293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.577302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.577599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.577890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.577898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.578158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.578375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.578383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.578699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.579032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.579041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.579395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.579704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.579713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.580030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.580346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.580356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-05-16 09:48:56.580673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.580878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.580887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.581085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.581380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.581389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.581707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.581964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.581972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.582178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.582527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.582536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.582817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.583128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.583137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.583398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.583722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.583732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.584030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.584302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.584312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-05-16 09:48:56.584602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.584924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.584934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.585218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.585538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.585548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.585852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.586168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.586178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.586361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.586680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.586689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.586875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.587223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.587233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.587427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.587740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.587749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.587916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.588196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.588206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 
00:36:03.130 [2024-05-16 09:48:56.588514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.588820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.588830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.589133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.589441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.589453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.589762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.590075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.590085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.590380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.590698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.590708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.591029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.591356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.591365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.591651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.591863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.130 [2024-05-16 09:48:56.591872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.130 qpair failed and we were unable to recover it. 00:36:03.130 [2024-05-16 09:48:56.592182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.592479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.592488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-05-16 09:48:56.592849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.593165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.593174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.593456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.593768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.593777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.594101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.594403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.594412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.594719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.595047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.595059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.595361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.595562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.595571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.595859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.596157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.596166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.596498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.596688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.596697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-05-16 09:48:56.597004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.597302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.597311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.597615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.597889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.597898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.598222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.598560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.598569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.598891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.599188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.599197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.599509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.599827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.599839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.600146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.600491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.600500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.600750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.601084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.601094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.131 [2024-05-16 09:48:56.601390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.601697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.601707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.602013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.602306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.602315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.602617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.602952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.602962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.603226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.603404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.603414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.603694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.603892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.603901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.604200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.604380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.604390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 00:36:03.131 [2024-05-16 09:48:56.604601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.604902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.131 [2024-05-16 09:48:56.604912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.131 qpair failed and we were unable to recover it. 
00:36:03.502 [2024-05-16 09:48:56.694054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.502 [2024-05-16 09:48:56.694351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.502 [2024-05-16 09:48:56.694361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.694523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.694806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.694815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.695011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.695227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.695238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.695544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.695845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.695855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.696164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.696503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.696514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.696837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.697168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.697177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.697482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.697766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.697777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.697952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.698324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.698333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.698539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.698737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.698747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.699028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.699232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.699241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.699414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.699688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.699697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.700023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.700337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.700347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.700655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.700979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.700988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.701205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.701558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.701568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.701860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.702262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.702272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.702456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.702820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.702829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.703143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.703487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.703497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.703776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.704116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.704126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.704451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.704782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.704791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.705093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.705403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.705412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.705731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.706019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.706028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.706334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.706536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.706545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.706844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.707107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.707117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.707495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.707743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.707753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.708051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.708260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.708269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.708541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.708769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.708779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.709097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.709396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.709406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.709566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.709888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.709897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.710166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.710457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.710466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.710767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.710953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.710962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.711281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.711602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.711611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.711894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.712200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.712209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.712565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.712843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.712852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.713142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.713362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.713371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.713572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.713923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.713932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.714119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.714385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.714395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.714764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.715055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.715064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.715451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.715646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.715656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.715969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.716270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.716280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.716570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.716876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.716885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.717187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.717513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.717523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.717801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.718094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.718103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.718413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.718715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.718724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.719046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.719327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.719337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.719693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.720034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.720044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.720271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.720677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.720686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.721028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.721351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.721361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.721680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.722000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.722009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.722325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.722644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.722653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 
00:36:03.503 [2024-05-16 09:48:56.722970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.723292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.723301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.723606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.723913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.723922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.724221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.724535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.724544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.503 qpair failed and we were unable to recover it. 00:36:03.503 [2024-05-16 09:48:56.724834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.503 [2024-05-16 09:48:56.725139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.725149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.725363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.725685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.725694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.726045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.726271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.726281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.726582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.726902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.726912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.727251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.727572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.727581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.727866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.728047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.728063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.728268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.728453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.728462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.728678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.728882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.728891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.728945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.729173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.729182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.729393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.729693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.729702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.730002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.730322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.730332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.730611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.730940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.730949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.731100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.731432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.731441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.731514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.731816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.731825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.731943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.732118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.732128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.732299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.732484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.732498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.732797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.733019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.733028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.733251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.733539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.733549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.733860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.734162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.734172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.734480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.734775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.734784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.735107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.735488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.735497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.735806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.736051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.736064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.736459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.736750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.736759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.737083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.737384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.737393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.737776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.738024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.738033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.738397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.738694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.738703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.739045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.739276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.739285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.739483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.739829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.739837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.740111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.740453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.740463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.740785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.741056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.741065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.741444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.741739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.741749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.741946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.742234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.742243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.742542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.742828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.742837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.743175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.743465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.743474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.743793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.744077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.744085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.744277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.744601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.744610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.744912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.745180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.745189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.745517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.745717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.745726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.745778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.746063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.746073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.746278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.746481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.746490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.746716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.747006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.747015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.747302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.747449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.747458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.747758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.748070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.748079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.748426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.748755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.748763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.748990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.749178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.749187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.749486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.749789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.749798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.750084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.750397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.750406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.750725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.751037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.751046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.751380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.751563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.751571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.751957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.752083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.752091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.752253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.752550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.752559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.752862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.753162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.753171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.753503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.753726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.753736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 
00:36:03.504 [2024-05-16 09:48:56.754043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.754361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.754370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.754597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.754867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.754875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.755062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.755340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.755348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.755684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.756008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.756017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.504 qpair failed and we were unable to recover it. 00:36:03.504 [2024-05-16 09:48:56.756342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.504 [2024-05-16 09:48:56.756636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.756644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.756971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.757281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.757291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.757596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.757852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.757860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.758166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.758499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.758507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.758705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.758981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.758990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.759356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.759529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.759539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.759863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.760167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.760176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.760484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.760821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.760829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.761079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.761358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.761367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.761694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.761969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.761980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.762347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.762637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.762646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.762845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.763024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.763033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.763148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.763340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.763348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.763676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.763831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.763841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.764132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.764487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.764495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.764778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.765101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.765111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.765472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.765813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.765822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.766128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.766400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.766408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.766603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.766905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.766913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.767201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.767521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.767530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.767851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.768054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.768063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.768288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.768621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.768630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.768968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.769300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.769308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.769630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.769923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.769931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.770335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.770678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.770687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.771062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.771366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.771374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.771682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.771957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.771965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.772276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.772553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.772561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.772738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.773086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.773095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.773357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.773684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.773692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.773917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.774238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.774247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.774448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.774717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.774725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.775016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.775293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.775301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.775584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.775743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.775752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.776033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.776363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.776372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.776664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.776986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.776994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.777311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.777635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.777644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.777978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.778296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.778305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.778614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.778954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.778962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.779163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.779465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.779473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.779799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.780115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.780125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.780421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.780727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.780736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.781068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.781390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.781398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.781640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.781834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.781843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.782189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.782468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.782477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.782838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.783117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.783126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.783410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.783723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.783732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.783781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.784092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.784102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.784251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.784595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.784604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.784841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.785000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.785010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.785246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.785639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.785647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 00:36:03.505 [2024-05-16 09:48:56.785872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.786049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.786060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.505 qpair failed and we were unable to recover it. 
00:36:03.505 [2024-05-16 09:48:56.786416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.505 [2024-05-16 09:48:56.786714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.786723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.787030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.787343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.787352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.787633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.787969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.787978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.788161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.788456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.788465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.788773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.789063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.789073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.789433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.789732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.789741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.789936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.790250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.790260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:56.790571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.790898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.790907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.791137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.791409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.791420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.791699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.792036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.792045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.792248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.792570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.792579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.792669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.792935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.792944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.793123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.793313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.793322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:56.793536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.793846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.793855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:56.794048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.794387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:56.794396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.004921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.005344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.005365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.005711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.006107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.006143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.006550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.006910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.006925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.007389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.007823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.007841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.008374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.008761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.008780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.009226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.009577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.009592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:57.009937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.010256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.010270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.010634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.010973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.010985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.011258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.011557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.011570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.011771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.012066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.012080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.012351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.012659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.012672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.012992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.013231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.013244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.013570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.013857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.013869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:57.014100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.014460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.014473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.014823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.015123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.015136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.015460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.015802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.015815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.016132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.016361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.016375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.016714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.017038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.017050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.017367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.017701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.017713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.018030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.018379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.018391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:57.018716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.019032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.019045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.019354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.019684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.019697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.020044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.020401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.020413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.020734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.021041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.021059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.021350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.021558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.021569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.021900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.022103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.022115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.022433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.022767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.022780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:57.023102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.023465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.023477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.023793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.024066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.024077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.024332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.024663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.024675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.025016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.025370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.025382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.025715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.025991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.026003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.026318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.026650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.026662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.026983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.027292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.027305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.506 [2024-05-16 09:48:57.027628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.027970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.027984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.028198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.028543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.028557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.028876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.029165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.029178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.029395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.029720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.029733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.030084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.030420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.030432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.030748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.031081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.031094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 00:36:03.506 [2024-05-16 09:48:57.031343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.031657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.506 [2024-05-16 09:48:57.031671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.506 qpair failed and we were unable to recover it. 
00:36:03.794 [2024-05-16 09:48:57.031993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.032862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.032904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.033264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.033595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.033609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.033931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.034291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.034305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.034629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.034976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.034996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.035294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.035624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.035639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.035949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.036282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.036296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.036616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.036937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.036950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 
00:36:03.794 [2024-05-16 09:48:57.037307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.037632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.037645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.038033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.038430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.038445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.038793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.039116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.039131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.039353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.039632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.039646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.040056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.040418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.040432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.040622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.040950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.040964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 00:36:03.794 [2024-05-16 09:48:57.041312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.041636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.794 [2024-05-16 09:48:57.041654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.794 qpair failed and we were unable to recover it. 
[... the same four-record sequence — two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, followed by "nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." — repeats without change (timestamps only) from 09:48:57.042 through 09:48:57.133 ...]
00:36:03.799 [2024-05-16 09:48:57.133493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.133837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.133848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-05-16 09:48:57.134187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.134525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.134536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-05-16 09:48:57.134746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.135095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.135107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-05-16 09:48:57.135444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.135788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.135800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-05-16 09:48:57.136110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.136457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.136469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-05-16 09:48:57.136798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.137127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.137142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.799 qpair failed and we were unable to recover it. 00:36:03.799 [2024-05-16 09:48:57.137445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.799 [2024-05-16 09:48:57.137764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.137776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-05-16 09:48:57.137975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.138294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.138307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.138661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.138997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.139009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.139407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.139734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.139746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.140065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.140398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.140410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.140729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.140914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.140926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.141245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.141594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.141607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.141832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.142189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.142202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-05-16 09:48:57.142517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.142830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.142843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.143086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.143427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.143440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.143785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.144104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.144117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.144342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.144648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.144660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.145003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.145301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.145314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.145665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.145992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.146004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.146331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.146662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.146674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-05-16 09:48:57.147015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.147323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.147336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.147653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.147971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.147983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.148349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.148666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.148678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.149091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.149408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.149420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.149741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.150116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.150129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.150465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.150824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.150837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.151151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.151378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.151389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-05-16 09:48:57.151731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.152060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.152072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.152427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.152741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.152753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.153090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.153390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.153402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.153714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.154035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.154046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.154294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.154627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.154640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.154963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.155267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.155279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.155599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.155916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.155928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 
00:36:03.800 [2024-05-16 09:48:57.156278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.156620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.800 [2024-05-16 09:48:57.156631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.800 qpair failed and we were unable to recover it. 00:36:03.800 [2024-05-16 09:48:57.156961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.157284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.157297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.157650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.158025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.158038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.158219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.158566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.158577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.158887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.159218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.159231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.159415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.159693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.159704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.159908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.160219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.160232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-05-16 09:48:57.160523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.160788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.160800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.161125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.161440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.161452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.161781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.162108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.162119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.162454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.162797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.162809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.163127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.163522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.163535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.163854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.164182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.164195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.164550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.164865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.164878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-05-16 09:48:57.165104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.165311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.165322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.165509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.165839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.165851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.166171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.166474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.166486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.166787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.167097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.167109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.167435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.167755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.167766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.168079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.168378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.168390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.168715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.169032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.169045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-05-16 09:48:57.169424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.169724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.169738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.169913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.170230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.170243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.170434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.170709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.170722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.171038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.171377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.171390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.171736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.172046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.172064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.172291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.172632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.172645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.172991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.173128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.173140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 
00:36:03.801 [2024-05-16 09:48:57.173521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.173619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.173629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.173925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.174095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.174108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.801 [2024-05-16 09:48:57.174400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.174706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.801 [2024-05-16 09:48:57.174718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.801 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.174984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.175199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.175214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.175545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.175869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.175882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.176155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.176452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.176464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.176691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.176819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.176834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-05-16 09:48:57.177174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.177547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.177559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.177770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.178116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.178128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.178344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.178527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.178541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.178840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.179173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.179186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.179500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.179727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.179740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.180058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.180302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.180314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.180643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.180940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.180952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-05-16 09:48:57.181172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.181453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.181465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.181647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.181856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.181869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.182157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.182510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.182521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.182870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.183148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.183159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.183464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.183777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.183789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.184084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.184412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.184424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.184730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.185015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.185027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-05-16 09:48:57.185195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.185386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.185397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.185691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.186012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.186024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.186220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.186553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.186566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.186813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.187070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.187082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.187395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.187720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.187732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.188039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.188182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.188194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.188487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.188669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.188681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 
00:36:03.802 [2024-05-16 09:48:57.188895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.189072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.189086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.189406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.189575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.189586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.189757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.190108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.190120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.190427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.190742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.190754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.802 [2024-05-16 09:48:57.191114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.191413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.802 [2024-05-16 09:48:57.191426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.802 qpair failed and we were unable to recover it. 00:36:03.803 [2024-05-16 09:48:57.191711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-05-16 09:48:57.192037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-05-16 09:48:57.192049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 00:36:03.803 [2024-05-16 09:48:57.192359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-05-16 09:48:57.192741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.803 [2024-05-16 09:48:57.192753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.803 qpair failed and we were unable to recover it. 
00:36:03.803 [2024-05-16 09:48:57.193152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.803 [2024-05-16 09:48:57.193491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:03.803 [2024-05-16 09:48:57.193503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420
00:36:03.803 qpair failed and we were unable to recover it.
00:36:03.803 [... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1c54270 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, from 09:48:57.193152 through 09:48:57.284220, elapsed log time 00:36:03.803 to 00:36:03.808 ...]
00:36:03.808 [2024-05-16 09:48:57.284523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.284834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.284843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.285129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.285310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.285320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.285629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.285799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.285808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.286123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.286456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.286466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.286678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.286983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.286992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.287380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.287744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.287753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.288078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.288293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.288302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 
00:36:03.808 [2024-05-16 09:48:57.288642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.288953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.288964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.289321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.289627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.289637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.289956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.290299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.290309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.290587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.290888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.290899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.291195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.291519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.291529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.291824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.292034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.292043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 00:36:03.808 [2024-05-16 09:48:57.292376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.292697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.808 [2024-05-16 09:48:57.292706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.808 qpair failed and we were unable to recover it. 
00:36:03.808 [2024-05-16 09:48:57.292880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.293198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.293208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.293567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.293767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.293776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.294083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.294290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.294299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.294621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.294923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.294933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.295230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.295562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.295573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.295779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.295968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.295978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.296179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.296372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.296382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 
00:36:03.809 [2024-05-16 09:48:57.296659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.296846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.296856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.297062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.297112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.297122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.297313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.297619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.297629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.297888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.298072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.298083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.298372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.298544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.298554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.298905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.299192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.299202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.299525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.299845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.299856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 
00:36:03.809 [2024-05-16 09:48:57.300143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.300475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.300486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.300772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.301047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.301060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.301240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.301569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.301579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.301860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.302199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.302209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.302493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.302710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.302720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.303066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.303380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.303389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.303672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.303900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.303910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 
00:36:03.809 [2024-05-16 09:48:57.304233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.304436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.304446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.304765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.305096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.305107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.305289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.305591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.305600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.305968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.306267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.306277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.306591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.306904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.306913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.307222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.307539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.307548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.307855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.308139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.308148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 
00:36:03.809 [2024-05-16 09:48:57.308434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.308657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.308666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.809 qpair failed and we were unable to recover it. 00:36:03.809 [2024-05-16 09:48:57.308853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.309143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.809 [2024-05-16 09:48:57.309153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.309458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.309768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.309777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.310089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.310417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.310427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.310744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.310957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.310966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.311278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.311603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.311612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.311924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.312000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.312010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 
00:36:03.810 [2024-05-16 09:48:57.312325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.312636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.312645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.312993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.313326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.313336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.313652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.313944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.313953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.314235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.314608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.314617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.314913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.315257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.315268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.315589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.315942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.315952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.316278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.316611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.316620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 
00:36:03.810 [2024-05-16 09:48:57.316924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.317231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.317240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.317569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.317895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.317904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.318202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.318498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.318508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.318829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.319136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.319147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.319488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.319784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.319793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.320082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.320278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.320288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.320614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.320944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.320954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 
00:36:03.810 [2024-05-16 09:48:57.321271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.321598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.321608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.321805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.322154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.322164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.322502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.322681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.322690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.322883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.323113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.323123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.323318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.323521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.323531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.323930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.324261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.324271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.324607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.324932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.324942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 
00:36:03.810 [2024-05-16 09:48:57.325224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.325552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.325561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.325912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.326226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.326236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.326355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.326742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.326751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.810 qpair failed and we were unable to recover it. 00:36:03.810 [2024-05-16 09:48:57.326955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.810 [2024-05-16 09:48:57.327270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.327280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.327586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.327913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.327924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.328231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.328567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.328576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.328873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.329184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.329194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 
00:36:03.811 [2024-05-16 09:48:57.329489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.329822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.329831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.330152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.330487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.330497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.330797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.331122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.331134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.331479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.331816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.331825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.332202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.332509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.332519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.332812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.333109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.333129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.333347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.333693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.333702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 
00:36:03.811 [2024-05-16 09:48:57.334007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.334370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.334381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.334697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.335021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.335031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.335222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.335512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.335521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.335849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.336172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.336182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.336490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.336817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.336827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.337179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.337476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.337485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.337819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.338147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.338157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 
00:36:03.811 [2024-05-16 09:48:57.338364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.338669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.338678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.338990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.339357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.339367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.339675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.339978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:03.811 [2024-05-16 09:48:57.339987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:03.811 qpair failed and we were unable to recover it. 00:36:03.811 [2024-05-16 09:48:57.340260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.340608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.340620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.095 qpair failed and we were unable to recover it. 00:36:04.095 [2024-05-16 09:48:57.340809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.341134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.341145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.095 qpair failed and we were unable to recover it. 00:36:04.095 [2024-05-16 09:48:57.342171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.342508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.342520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.095 qpair failed and we were unable to recover it. 00:36:04.095 [2024-05-16 09:48:57.342867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.343183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.095 [2024-05-16 09:48:57.343194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.095 qpair failed and we were unable to recover it. 
00:36:04.095 [2024-05-16 09:48:57.343491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.095 [2024-05-16 09:48:57.343674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.095 [2024-05-16 09:48:57.343683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420
00:36:04.095 qpair failed and we were unable to recover it.
[... the same retry sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every attempt from 09:48:57.343892 through 09:48:57.436025 ...]
00:36:04.101 [2024-05-16 09:48:57.436314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.101 [2024-05-16 09:48:57.436607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.101 [2024-05-16 09:48:57.436616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420
00:36:04.101 qpair failed and we were unable to recover it.
00:36:04.101 [2024-05-16 09:48:57.436901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.437228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.437237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.437449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.437740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.437750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.438072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.438145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.438155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.438444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.438791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.438800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.439093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.439384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.439393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.439725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.440017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.440026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.440195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.440491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.440500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 
00:36:04.101 [2024-05-16 09:48:57.440822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.441147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.441156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.441441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.441769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.441778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.442081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.442402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.442411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.442728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.443017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.443026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.443192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.443472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.443481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.443768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.444067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.444077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.444277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.444605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.444614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 
00:36:04.101 [2024-05-16 09:48:57.444909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.445229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.445239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.445528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.445865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.445874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.446218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.446473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.446482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.446785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.447075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.447084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.447373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.447685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.447694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.447981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.448300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.448309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.448598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.448803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.448812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 
00:36:04.101 [2024-05-16 09:48:57.449169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.449473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.449482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.449794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.450081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.450090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.450398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.450694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.450702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.450984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.451278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.451287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.101 qpair failed and we were unable to recover it. 00:36:04.101 [2024-05-16 09:48:57.451571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.101 [2024-05-16 09:48:57.451785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.451794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.452071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.452388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.452397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.452677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.453008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.453019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 
00:36:04.102 [2024-05-16 09:48:57.453318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.453638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.453647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.453947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.454254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.454264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.454555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.454886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.454896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.455208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.455529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.455539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.455809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.456108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.456118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.456399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.456719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.456728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.457060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.457353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.457362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 
00:36:04.102 [2024-05-16 09:48:57.457653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.457974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.457983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.458314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.458636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.458645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.458968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.459268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.459277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.459560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.459869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.459878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.460199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.460548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.460557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.460725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.461115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.461125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.461402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.461712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.461724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 
00:36:04.102 [2024-05-16 09:48:57.462030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.462356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.462366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.462647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.462940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.462949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.463262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.463358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.463367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.463652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.463989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.463998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.464274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.464596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.464605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.464789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.465059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.465069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.465419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.465710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.465719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 
00:36:04.102 [2024-05-16 09:48:57.466008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.466372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.466382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.466686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.466977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.466986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.467272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.467603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.467612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.467926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.468227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.468236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.468440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.468653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.468662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.102 qpair failed and we were unable to recover it. 00:36:04.102 [2024-05-16 09:48:57.468977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.102 [2024-05-16 09:48:57.469284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.469294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.469616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.469830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.469839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 
00:36:04.103 [2024-05-16 09:48:57.470185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.470479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.470488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.470781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.471103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.471113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.471395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.471685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.471694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.471978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.472270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.472280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.472606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.472889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.472898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.473209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.473526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.473535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.473807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.474106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.474116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 
00:36:04.103 [2024-05-16 09:48:57.474424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.474727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.474736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.475092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.475383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.475393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.475703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.476023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.476033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.476364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.476672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.476681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.476963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.477263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.477272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.477578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.477872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.477881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.478183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.478506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.478515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 
00:36:04.103 [2024-05-16 09:48:57.478798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.479003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.479012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.479323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.479530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.479539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.479829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.480119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.480129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.480408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.480704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.480713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.480907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.481205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.481214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.481530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.481828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.481837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.482171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.482505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.482514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 
00:36:04.103 [2024-05-16 09:48:57.482801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.483099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.483109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.483401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.483692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.483701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.484007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.484326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.484336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.484642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.484994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.485003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.485310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.485602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.485611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.485941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.486272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.486282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 00:36:04.103 [2024-05-16 09:48:57.486569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.486881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.103 [2024-05-16 09:48:57.486890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.103 qpair failed and we were unable to recover it. 
00:36:04.104 [2024-05-16 09:48:57.487168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.487508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.487517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.487830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.488160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.488170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.488493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.488796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.488805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.489083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.489368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.489376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.489690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.489982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.489991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.490305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.490625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.490634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.490952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.491114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.491124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 
00:36:04.104 [2024-05-16 09:48:57.491448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.491775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.491784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.492072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.492370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.492381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.492728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.493027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.493036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.493416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.493742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.493751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.494037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.494377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.494386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.494707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.495027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.495036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.495379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.495667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.495676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 
00:36:04.104 [2024-05-16 09:48:57.495958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.496254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.496264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.496585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.496879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.496888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.497162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.497568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.497578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.497901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.498182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.498191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.104 qpair failed and we were unable to recover it. 00:36:04.104 [2024-05-16 09:48:57.498499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.104 [2024-05-16 09:48:57.498799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.498809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.499117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.499433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.499442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.499721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.500042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.500060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 
00:36:04.105 [2024-05-16 09:48:57.500410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.500736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.500745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.501094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.501387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.501396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.501725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.501937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.501947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.502180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.502387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.502396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.502716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.502798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.502807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.502899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.503087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.503098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.503303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.503600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.503609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 
00:36:04.105 [2024-05-16 09:48:57.503868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.504179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.504188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.504541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.504842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.504851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.505162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.505490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.505500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.505670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.505963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.505972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.506296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.506623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.506632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.506807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.507098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.507107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.507429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.507740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.507749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 
00:36:04.105 [2024-05-16 09:48:57.508061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.508493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.508502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.508791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.509019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.509029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.509236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.509528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.509537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.509852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.510172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.510181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.510373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.510616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.510625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.510796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.511117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.511127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.511430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.511736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.511745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 
00:36:04.105 [2024-05-16 09:48:57.512069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.512443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.512452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.512661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.512995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.513004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.513310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.513635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.513644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.513940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.514226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.514236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.105 qpair failed and we were unable to recover it. 00:36:04.105 [2024-05-16 09:48:57.514544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.514957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.105 [2024-05-16 09:48:57.514967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.515166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.515479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.515488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.515770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.516097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.516106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 
00:36:04.106 [2024-05-16 09:48:57.516417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.516707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.516716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.517031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.517371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.517381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.517706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.517966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.517975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.518295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.518637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.518646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.518816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.518990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.519000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.519141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.519499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.519508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.519810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.520074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.520084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 
00:36:04.106 [2024-05-16 09:48:57.520279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.520589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.520598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.520763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.521038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.521048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.521334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.521511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.521521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.521838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.522165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.522175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.522489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.522790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.522799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.523122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.523450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.523459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.523776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.524149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.524158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 
00:36:04.106 [2024-05-16 09:48:57.524472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.524790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.524799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.525132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.525470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.525479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.525690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.525963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.525972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.526297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.526600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.526609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.526987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.527058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.527068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.527374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.527664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.527673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.527977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.528210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.528222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 
00:36:04.106 [2024-05-16 09:48:57.528425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.528753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.528762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.529063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.529247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.529256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.529460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.529803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.529812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.529991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.530186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.530197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.530490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.530816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.530825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.106 qpair failed and we were unable to recover it. 00:36:04.106 [2024-05-16 09:48:57.530991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.531353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.106 [2024-05-16 09:48:57.531362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.531663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.531976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.531985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-05-16 09:48:57.532326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.532617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.532626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.532944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.533239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.533249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.533565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.533857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.533867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.534254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.534567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.534576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.534863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.535067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.535077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.535411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.535716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.535725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.536021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.536366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.536376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-05-16 09:48:57.536567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.536908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.536917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.537280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.537574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.537583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.537868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.538093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.538102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.538428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.538749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.538758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.539049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.539388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.539397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.539607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.539922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.539931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.540364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.540670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.540679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-05-16 09:48:57.540964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.541279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.541288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.541579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.541900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.541909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.542088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.542363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.542373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.542705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.542886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.542895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.543190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.543540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.543549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.543741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.544092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.544102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.544407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.544638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.544647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 
00:36:04.107 [2024-05-16 09:48:57.544758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.544987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.544997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.545310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.545591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.545600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.545891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.546164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.546173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.546484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.546664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.546673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.546907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.547176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.547185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.547572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.547865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.107 [2024-05-16 09:48:57.547875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.107 qpair failed and we were unable to recover it. 00:36:04.107 [2024-05-16 09:48:57.548190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.548508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.548517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
00:36:04.108 [2024-05-16 09:48:57.548847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.549049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.549062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.549402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.549706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.549715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.550025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.550335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.550345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.550521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.550782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.550791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.550981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.551285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.551295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.551584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.551906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.551915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.552236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.552557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.552567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
00:36:04.108 [2024-05-16 09:48:57.552882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.553208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.553217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.553531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.553843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.553852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.554046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.554230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.554240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c54270 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with 
error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Write completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 Read completed with error (sct=0, sc=8) 00:36:04.108 starting I/O failed 00:36:04.108 [2024-05-16 09:48:57.554940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:04.108 [2024-05-16 09:48:57.555354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.555720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.555759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.556327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.556755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.556793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.557325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.557749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.557787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.558288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.558721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.558758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.559125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.559497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.559525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.559885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.560262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.560290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 
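Two further failure modes appear here alongside the connection errors: a burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" completions, and a "CQ transport error -6 (No such device or address)" (-ENXIO) from spdk_nvme_qpair_process_completions, after which the connect attempts continue against a fresh tqpair handle (0x7f0840000b90). Per the NVMe base specification, sct=0 is the Generic Command Status type, and status code 0x08 in that type is "Command Aborted due to SQ Deletion", which is consistent with outstanding I/O being aborted while the queue pair is torn down. The hedged sketch below decodes the (sct, sc) pair printed in the log into that reading; the string table is illustrative only and is not SPDK's own decoder.

/* Hedged sketch: map the (sct, sc) pair printed by the test log to a
 * human-readable NVMe status, the way one would read these completions by
 * hand.  The names follow the NVMe base specification's Generic Command
 * Status table; only a few entries are spelled out here. */
#include <stdio.h>

static const char *nvme_generic_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x06: return "Internal Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "Other Generic Command Status";
    }
}

int main(void)
{
    unsigned sct = 0, sc = 8;   /* values from the "completed with error" lines */

    if (sct == 0)
        printf("sct=%u sc=%u -> %s\n", sct, sc, nvme_generic_sc_str(sc));
    else
        printf("sct=%u sc=%u -> non-generic status type\n", sct, sc);
    return 0;
}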
00:36:04.108 [2024-05-16 09:48:57.560646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.560867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.560896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.561250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.561579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.561605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.561938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.562292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.562321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.108 qpair failed and we were unable to recover it. 00:36:04.108 [2024-05-16 09:48:57.562680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.563015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.108 [2024-05-16 09:48:57.563042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.563318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.563671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.563699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.564038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.564401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.564430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.564766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.565005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.565036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-05-16 09:48:57.565410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.565758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.565784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.566120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.566444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.566470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.566825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.567160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.567188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.567545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.567876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.567903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.568261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.568630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.568657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.568995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.569335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.569364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.569728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.570089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.570117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-05-16 09:48:57.570435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.570787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.570820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.571144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.571467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.571493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.571854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.572217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.572245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.572593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.572942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.572969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.573325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.573687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.573714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.574066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.574407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.574433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.574776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.575138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.575167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 
00:36:04.109 [2024-05-16 09:48:57.575543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.575878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.575905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.576265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.576614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.576641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.576992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.577328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.577356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.577704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.578047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.578090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.578430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.578762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.578790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.579119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.579458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.109 [2024-05-16 09:48:57.579485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.109 qpair failed and we were unable to recover it. 00:36:04.109 [2024-05-16 09:48:57.579840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.580157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.580185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-05-16 09:48:57.580537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.580890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.580917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.581273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.581625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.581652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.581991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.582340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.582367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.582741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.583072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.583100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.583444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.583778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.583804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.584150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.584498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.584524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.584873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.585224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.585257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-05-16 09:48:57.585629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.585978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.586004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.586354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.586694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.586720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.587074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.587428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.587454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.587808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.588155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.588184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.588500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.588855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.588882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.589224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.589472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.589499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.589834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.590185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.590212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-05-16 09:48:57.590531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.590887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.590913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.591263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.591501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.591528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.591865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.592208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.592237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.592586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.592927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.592954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.593313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.593677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.593704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.594033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.594384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.594411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.594757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.595128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.595156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 
00:36:04.110 [2024-05-16 09:48:57.595402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.595749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.595776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.110 qpair failed and we were unable to recover it. 00:36:04.110 [2024-05-16 09:48:57.596133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.110 [2024-05-16 09:48:57.596368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.596395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.596745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.597088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.597116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.597450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.597806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.597833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.598075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.598403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.598429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.598793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.599130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.599157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.599514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.599866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.599892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-05-16 09:48:57.600264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.600603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.600630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.600964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.601315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.601343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.601680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.601997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.602023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.602284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.602619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.602646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.603013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.603359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.603387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.603731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.604090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.604120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.604471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.604814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.604841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-05-16 09:48:57.605067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.605419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.605446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.605683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.606045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.606084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.606424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.606772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.606799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.607143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.607494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.607521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.607883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.608231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.608259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.608598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.608909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.608935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.609283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.609606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.609633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-05-16 09:48:57.609989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.610353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.610380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.610734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.611079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.611108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.611468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.611705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.611732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.612105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.612460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.612486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.612837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.613180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.613208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.613565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.613899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.613925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.614272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.614591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.614617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 
00:36:04.111 [2024-05-16 09:48:57.614964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.615297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.615325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.615694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.616050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.616100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.111 qpair failed and we were unable to recover it. 00:36:04.111 [2024-05-16 09:48:57.616420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.111 [2024-05-16 09:48:57.616730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.616757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.617100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.617445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.617471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.617819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.618164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.618192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.618552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.618897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.618924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.619269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.619621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.619647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-05-16 09:48:57.619990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.620330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.620357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.620694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.621051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.621086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.621440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.621793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.621820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.622175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.622512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.622539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.622882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.623216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.623244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.623583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.623928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.623956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.624343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.624592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.624619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-05-16 09:48:57.624978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.625285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.625314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.625536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.625863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.625891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.626211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.626572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.626599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.626914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.627177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.627206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.627591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.627924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.627952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.628302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.628620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.628648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.629031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.629387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.629416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-05-16 09:48:57.629769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.630122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.630151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.630520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.630896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.630924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.631360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.631695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.631723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.632077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.632434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.632461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.632811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.633165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.633193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.633536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.633867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.633894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.634237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.634580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.634609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 
00:36:04.112 [2024-05-16 09:48:57.634967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.635191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.635219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.635551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.635874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.635901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.636223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.636587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.636614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.112 qpair failed and we were unable to recover it. 00:36:04.112 [2024-05-16 09:48:57.636960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.112 [2024-05-16 09:48:57.637290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.637319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-05-16 09:48:57.637671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.638014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.638041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-05-16 09:48:57.638462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.638751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.638778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-05-16 09:48:57.639143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.639503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.639530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 
00:36:04.113 [2024-05-16 09:48:57.639856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.640206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.640234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-05-16 09:48:57.640601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.640935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.640962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.113 [2024-05-16 09:48:57.641291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.641632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.113 [2024-05-16 09:48:57.641659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.113 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.642008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.642221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.642249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.642603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.642941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.642970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.643304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.643645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.643673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.644007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.644365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.644394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 
00:36:04.393 [2024-05-16 09:48:57.644744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.645104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.645151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.645509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.645851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.645878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.646297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.646609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.646636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.646975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.647278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.647307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.647678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.648018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.648046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.648433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.648784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.648812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.649158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.649508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.649536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 
00:36:04.393 [2024-05-16 09:48:57.649869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.650210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.650239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.650585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.650959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.650987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.651334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.651657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.651685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.652036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.652280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.652308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.652666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.653010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.653039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.653385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.653724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.653752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.654100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.654467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.654494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 
00:36:04.393 [2024-05-16 09:48:57.654870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.655209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.655238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.655570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.655909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.655936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.656293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.656635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.656663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.657045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.657416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.657443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.657787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.658119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.658147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.658478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.658795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.658824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 00:36:04.393 [2024-05-16 09:48:57.659190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.659391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.393 [2024-05-16 09:48:57.659421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.393 qpair failed and we were unable to recover it. 
00:36:04.393 [2024-05-16 09:48:57.659787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.393 [2024-05-16 09:48:57.660120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.393 [2024-05-16 09:48:57.660148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:04.393 qpair failed and we were unable to recover it.
00:36:04.396 [... the same four-line sequence — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it." — repeats continuously from 09:48:57.660 through 09:48:57.767 ...]
00:36:04.396 [2024-05-16 09:48:57.767890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.768246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.768275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.396 qpair failed and we were unable to recover it. 00:36:04.396 [2024-05-16 09:48:57.768638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.768980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.769007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.396 qpair failed and we were unable to recover it. 00:36:04.396 [2024-05-16 09:48:57.769372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.769705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.769734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.396 qpair failed and we were unable to recover it. 00:36:04.396 [2024-05-16 09:48:57.770063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.770300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.770328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.396 qpair failed and we were unable to recover it. 00:36:04.396 [2024-05-16 09:48:57.770664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.396 [2024-05-16 09:48:57.771011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.771039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.771379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.771735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.771763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.772111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.772481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.772508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.772725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.773108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.773137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.773397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.773734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.773760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.774014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.774363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.774393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.774734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.775050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.775086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.775448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.775801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.775828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.776181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.776539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.776568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.776905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.777274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.777304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.777646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.777938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.777967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.778295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.778629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.778656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.778901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.779219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.779248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.779586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.779941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.779969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.780204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.780494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.780523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.780874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.781105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.781132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.781470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.781682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.781708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.782048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.782309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.782337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.782580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.782801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.782832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.783213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.783539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.783568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.783911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.784114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.784145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.784544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.784780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.784809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.785123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.785470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.785499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.785858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.786175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.786204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.786552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.786896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.786924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.787273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.787651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.787678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.788024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.788244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.788273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.788605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.788959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.788988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.789337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.789683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.789718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.790046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.790389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.790419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.790782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.790992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.791018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.791386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.791703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.791731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.792080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.792417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.792445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.792798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.793144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.793173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.793515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.793862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.793890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.794244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.794596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.794624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.794854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.795185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.795215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.795532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.795850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.795878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.796216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.796583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.796618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.796837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.797216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.797245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.797598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.797943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.797972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.798300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.798634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.798663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.798997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.799235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.799264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.799592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.799893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.799921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.800291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.800643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.800671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.801037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.801373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.801402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.801752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.802129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.802159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.802513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.802846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.802874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.803121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.803469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.803502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.803729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.804071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.804100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.804441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.804660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.804688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.805064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.805400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.805429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 
00:36:04.397 [2024-05-16 09:48:57.805757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.806105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.806136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.806506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.806850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.806878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.807227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.807582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.807611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.807978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.808318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.808348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.397 qpair failed and we were unable to recover it. 00:36:04.397 [2024-05-16 09:48:57.808576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.808918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.397 [2024-05-16 09:48:57.808947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.809300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.809649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.809679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.810016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.810236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.810272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398 [2024-05-16 09:48:57.810628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.810863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.810892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.811320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.811649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.811677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.812027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.812374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.812404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.812744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.813075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.813103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.813445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.813791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.813820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.814178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.814400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.814430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.814747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.815094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.815123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398 [2024-05-16 09:48:57.815459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.815770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.815798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.816192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.816530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.816558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.816885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.817224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.817254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.817625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.817959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.817986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.818326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.818677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.818707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.818926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.819193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.819223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.819572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.819917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.819945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398 [2024-05-16 09:48:57.820282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.820624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.820652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.820856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.821177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.821206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.821577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.821912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.821940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.822301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.822642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.822670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.822991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.823331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.823362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.823704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.824040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.824078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.824401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.824747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.824775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398 [2024-05-16 09:48:57.825118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.825514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.825543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.825891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.826223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.826253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.826608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.826960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.826990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.827342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.827684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.827713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.828064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.828396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.828424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.828650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.829020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.829049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.829385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.829732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.829761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398 [2024-05-16 09:48:57.830111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.830427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.830455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.830815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.831126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.831157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.831518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.831837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.831865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.832090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.832397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.832426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.832790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.833124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.833153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.833501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.833847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.833875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 00:36:04.398 [2024-05-16 09:48:57.834241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.834554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.398 [2024-05-16 09:48:57.834582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398 [2024-05-16 09:48:57.834920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:04.398 [2024-05-16 09:48:57.835281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 
00:36:04.398 qpair failed and we were unable to recover it. 
00:36:04.398-00:36:04.684 (the same three-line sequence -- connect() failed, errno = 111 from posix.c:1037:posix_sock_create, sock connection error from nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt between 09:48:57.834920 and 09:48:57.943501, always with errno = 111, tqpair=0x7f0840000b90, addr=10.0.0.2, port=4420; the duplicated repetitions are condensed here.) 
00:36:04.684 [2024-05-16 09:48:57.943855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.944197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.944226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.944598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.944985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.945014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.945406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.945771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.945799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.946136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.946508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.946537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.946887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.947234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.947264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.947635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.947989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.948018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.948349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.948743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.948771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 
00:36:04.684 [2024-05-16 09:48:57.949134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.949490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.949519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.949869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.950222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.684 [2024-05-16 09:48:57.950252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.684 qpair failed and we were unable to recover it. 00:36:04.684 [2024-05-16 09:48:57.950636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.950983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.951013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.951415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.951774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.951803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.952166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.952552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.952581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.952915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.953266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.953297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.953521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.953854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.953883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 
00:36:04.685 [2024-05-16 09:48:57.954239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.954583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.954611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.954980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.955328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.955358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.955711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.956065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.956097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.956412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.956657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.956688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.957030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.957426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.957456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.957812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.958148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.958180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.958544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.958899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.958927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 
00:36:04.685 [2024-05-16 09:48:57.959233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.959586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.959615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.959970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.960318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.960347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.960546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.960855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.960885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.961235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.961596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.961624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.961987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.962355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.962386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.962737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.963092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.963124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.963484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.963846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.963874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 
00:36:04.685 [2024-05-16 09:48:57.964228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.964591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.964619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.964976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.965320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.965350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.965698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.966066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.966097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.966518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.966831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.966860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.967227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.967610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.967638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.967997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.968330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.968360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.968712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.969076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.969106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 
00:36:04.685 [2024-05-16 09:48:57.969499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.969848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.969876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.970241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.970564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.970591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.970959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.971321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.971352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.685 qpair failed and we were unable to recover it. 00:36:04.685 [2024-05-16 09:48:57.971738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.972098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.685 [2024-05-16 09:48:57.972128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.972498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.972847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.972876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.973097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.973484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.973514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.973879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.974232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.974262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 
00:36:04.686 [2024-05-16 09:48:57.974514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.974876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.974906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.975242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.975601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.975631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.975979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.976205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.976236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.976602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.976990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.977021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.977359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.977711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.977741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.978095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.978464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.978494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.978838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.979191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.979223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 
00:36:04.686 [2024-05-16 09:48:57.979598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.979824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.979856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.980210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.980578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.980608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.980961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.981189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.981218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.981548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.981918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.981949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.982287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.982639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.982669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.983029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.983396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.983426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.983789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.984158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.984189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 
00:36:04.686 [2024-05-16 09:48:57.984542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.984895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.984924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.985284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.985632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.985662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.986017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.986394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.986425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.986790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.987127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.987157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.987514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.987872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.987902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.988274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.988637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.988666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 00:36:04.686 [2024-05-16 09:48:57.989024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.989391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.686 [2024-05-16 09:48:57.989423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.686 qpair failed and we were unable to recover it. 
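Errno 111 in these retries is ECONNREFUSED: the host's connect() to 10.0.0.2:4420 is rejected (typically with a TCP RST) because nothing is listening on that port while the target application is down. A minimal sketch that reproduces the same errno on an idle port; this is an illustration only, assuming a Linux host with bash and no listener on the chosen port, and is not part of the test scripts:

  # Probe a port with no listener; bash's /dev/tcp redirection issues a
  # connect(), just as posix_sock_create() does in the log above, and is
  # refused the same way. The subshell keeps the failed exec contained.
  if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
    echo "connect() refused (errno 111, ECONNREFUSED): no listener on port 4420"
  fi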
00:36:04.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 538578 Killed "${NVMF_APP[@]}" "$@"
[... the connect() errno = 111 / "qpair failed and we were unable to recover it." retries continue, interleaved with the shell trace below ...]
00:36:04.686 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:04.686 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:04.686 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:04.686 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:36:04.686 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=539614
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 539614
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 539614 ']'
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:04.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:36:04.687 09:48:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
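The trace above shows test case tc2 killing the previous target process (PID 538578) and then, via disconnect_init / nvmfappstart -m 0xF0, relaunching nvmf_tgt (new PID 539614) inside the cvl_0_0_ns_spdk network namespace and waiting for its RPC socket at /var/tmp/spdk.sock (rpc_addr, max_retries=100 in the trace). A rough bash sketch of that restart-and-wait pattern; it is simplified and hypothetical, the real helpers live in nvmf/common.sh and autotest_common.sh, and the actual waitforlisten does more than check that the socket file exists:

  NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  RPC_SOCK=/var/tmp/spdk.sock

  # Relaunch the target in the test namespace with the core mask from the trace.
  ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: poll up to ~10 s for the app to
  # create its UNIX domain RPC socket before the test proceeds.
  for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    sleep 0.1
  done
  if [ -S "$RPC_SOCK" ]; then
    echo "nvmf_tgt ($nvmfpid) is up; RPC socket $RPC_SOCK exists"
  else
    echo "timed out waiting for $RPC_SOCK" >&2
  fi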
00:36:04.687 [2024-05-16 09:48:58.005908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.687 [2024-05-16 09:48:58.006046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.687 [2024-05-16 09:48:58.006088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:04.687 qpair failed and we were unable to recover it.
[... the same retry pattern continues uninterrupted from 09:48:58.006 through 09:48:58.029 ...]
00:36:04.689 [2024-05-16 09:48:58.030293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.030656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.030684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.031043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.031494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.031524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.031885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.032234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.032266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.032628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.032987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.033017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.033404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.033734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.033765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.034125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.034567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.034596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.034954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.035197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.035234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 
00:36:04.689 [2024-05-16 09:48:58.035596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.035958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.035989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.036306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.036663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.036694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.037147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.037502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.037532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.037777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.038125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.038158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.038537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.038927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.038956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.039223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.039427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.039457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.039733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.040099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.040130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 
00:36:04.689 [2024-05-16 09:48:58.040362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.040679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.040708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.041090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.041440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.041473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.041843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.042205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.042242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.042506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.042831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.042861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.043202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.043596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.043625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.043977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.044233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.044262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.044637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.044983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.045012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 
00:36:04.689 [2024-05-16 09:48:58.045348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.045727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.045758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.046092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.046489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.046521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.046881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.047209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.047238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.689 [2024-05-16 09:48:58.047478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.047744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.689 [2024-05-16 09:48:58.047774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.689 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.048186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.048559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.048589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.048930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.049300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.049336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.049699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.049968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.049996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 
00:36:04.690 [2024-05-16 09:48:58.050371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.050738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.050768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.051121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.051496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.051525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.051899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.052285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.052316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.052672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.053030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.053081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.053345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.053696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.053724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.054096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.054380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.054408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.054758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.055127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.055158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 
00:36:04.690 [2024-05-16 09:48:58.055539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.055906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.055935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.056299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.056642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.056678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.057065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.057290] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:36:04.690 [2024-05-16 09:48:58.057345] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.690 [2024-05-16 09:48:58.057432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.057466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.057841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.058178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.058209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.058570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.058933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.058963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.059319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.059673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.059703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 
00:36:04.690 [2024-05-16 09:48:58.059924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.060172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.060204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.060593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.060821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.060856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.061154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.061571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.061943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.062188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.062220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.062575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.062935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.062966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.063316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.063648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.063679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.063910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.064262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.064294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 
00:36:04.690 [2024-05-16 09:48:58.064653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.065006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.065037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.065427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.065801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.065833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.066176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.066542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.066571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.066929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.067291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.067322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.690 [2024-05-16 09:48:58.067689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.068049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.690 [2024-05-16 09:48:58.068095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.690 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.068463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.068819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.068848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.069198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.069559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.069590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 
00:36:04.691 [2024-05-16 09:48:58.069948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.070349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.070380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.070746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.071125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.071156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.071517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.071856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.071886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.072190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.072580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.072610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.072872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.073102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.073137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.073539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.073855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.073885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.074273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.074626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.074655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 
00:36:04.691 [2024-05-16 09:48:58.074905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.075295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.075325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.075567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.075917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.075946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.076187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.076546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.076576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.076948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.077314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.077344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.077716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.078048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.078092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.078462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.078822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.078852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.079179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.079549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.079579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 
00:36:04.691 [2024-05-16 09:48:58.079920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.080298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.080328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.080592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.080938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.080967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.081344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.081708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.081738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.082121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.082481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.082510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.082871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.083229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.083260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.083641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.084003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.084032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.084433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.084685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.084713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 
00:36:04.691 [2024-05-16 09:48:58.084957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.085171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.085203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.085581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.085936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.085964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.086342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.086740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.086769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.087121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.087491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.087520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.087892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.088238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.088268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.088626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.088977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.089006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.691 qpair failed and we were unable to recover it. 00:36:04.691 [2024-05-16 09:48:58.089383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.691 [2024-05-16 09:48:58.089737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.089767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 
00:36:04.692 [2024-05-16 09:48:58.090120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.090449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.090479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.090817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.091173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.091204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.091566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.091933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.091961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.092301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.092656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.092685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.092927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.093283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.093313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.692 [2024-05-16 09:48:58.093685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.093890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.093919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.094313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.094640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.094669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 
00:36:04.692 [2024-05-16 09:48:58.095044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.095430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.095459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.095701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.096028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.096068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.096283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.096595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.096625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.096962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.097196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.097230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.097603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.097844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.097875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.098330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.098686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.098716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 00:36:04.692 [2024-05-16 09:48:58.099083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.099442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.692 [2024-05-16 09:48:58.099472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.692 qpair failed and we were unable to recover it. 
00:36:04.692 [2024-05-16 09:48:58.099827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.692 [2024-05-16 09:48:58.100213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:04.692 qpair failed and we were unable to recover it.
00:36:04.692 (the three messages above repeat for every reconnect attempt between 09:48:58.099 and 09:48:58.211; each attempt fails with errno = 111, connection refused, on tqpair=0x7f0840000b90 at 10.0.0.2 port 4420; repeated entries omitted for readability)
00:36:04.695 [2024-05-16 09:48:58.147451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:04.698 [2024-05-16 09:48:58.211218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.698 [2024-05-16 09:48:58.211610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:04.698 qpair failed and we were unable to recover it.
00:36:04.698 [2024-05-16 09:48:58.211976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.212336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.212367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.212734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.213090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.213121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.213501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.213859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.213888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.214235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.214461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.214492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.214836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.215235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.215268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.215627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.215982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.216013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.216406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.216794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.216824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 
00:36:04.698 [2024-05-16 09:48:58.217191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.217510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.217541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.217904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.218258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.218291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.218653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.218983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.219012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.698 qpair failed and we were unable to recover it. 00:36:04.698 [2024-05-16 09:48:58.219353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.219582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.698 [2024-05-16 09:48:58.219612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.219985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.220345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.220377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.220738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.220975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.221006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.221375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.221738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.221768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 
00:36:04.699 [2024-05-16 09:48:58.221993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.222340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.222371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.222744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.223138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.223169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.223533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.223892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.223922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.224294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.224507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.224537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.224893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.225234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.225266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.225644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.226032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.226074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.226432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.226816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.226847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 
00:36:04.699 [2024-05-16 09:48:58.227080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.227477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.227507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.227845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.228198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.228229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.228608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.228967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.228996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.229377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.229734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.229765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.230128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.230474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.230505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.230864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.231202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.231235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.231643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.231995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.232025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 
00:36:04.699 [2024-05-16 09:48:58.232274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.232520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.232551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.232869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.233220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.233251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.233616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.233849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.233881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.699 [2024-05-16 09:48:58.234254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.234467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.699 [2024-05-16 09:48:58.234496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.699 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.234909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.235133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.235164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.235554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.235774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.235804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.236161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.236519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.236551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 
00:36:04.981 [2024-05-16 09:48:58.236930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.237289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.237319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.237681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.238021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.238050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.238415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.238781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.238812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.239178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.239537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.239566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.239941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.240286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.240316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.240680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.241045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.241087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.241418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.241781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.241809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 
00:36:04.981 [2024-05-16 09:48:58.242175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.242584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.242615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.981 qpair failed and we were unable to recover it. 00:36:04.981 [2024-05-16 09:48:58.242976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.981 [2024-05-16 09:48:58.243331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.243362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.243720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.244079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.244110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.244473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.244494] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.982 [2024-05-16 09:48:58.244544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.982 [2024-05-16 09:48:58.244552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.982 [2024-05-16 09:48:58.244564] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.982 [2024-05-16 09:48:58.244570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:04.982 [2024-05-16 09:48:58.244738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:36:04.982 [2024-05-16 09:48:58.244867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.244895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.244942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:36:04.982 [2024-05-16 09:48:58.245136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:36:04.982 [2024-05-16 09:48:58.245247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.245349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:36:04.982 [2024-05-16 09:48:58.245498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.245525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
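The app_setup_trace notices above spell out how to pull the nvmf target's trace data while this run is still alive. A minimal sketch based only on those notices; the /tmp destination for the copy is an arbitrary choice, not something the log specifies:

  spdk_trace -s nvmf -i 0                      # capture a snapshot of events at runtime, as suggested by the notice
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # keep the shared-memory trace file for offline analysis/debug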
00:36:04.982 [2024-05-16 09:48:58.245801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.246160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.246190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.246567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.246927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.246956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.247308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.247665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.247693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.248111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.248377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.248405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.248801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.249147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.249178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.249443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.249835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.249864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.250265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.250638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.250668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
00:36:04.982 [2024-05-16 09:48:58.251033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.251382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.251413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.251676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.251960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.251988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.252364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.252608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.252636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.253036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.253399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.253430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.253667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.253962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.253991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.254335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.254574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.254604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.254961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.255245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.255274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
00:36:04.982 [2024-05-16 09:48:58.255524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.255772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.255802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.256077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.256436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.256466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.256856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.257224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.257257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.257660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.258016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.258045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.258320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.258705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.258735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.258988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.259281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.259311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 00:36:04.982 [2024-05-16 09:48:58.259679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.260049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.260092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.982 qpair failed and we were unable to recover it. 
00:36:04.982 [2024-05-16 09:48:58.260483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.982 [2024-05-16 09:48:58.260721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.260749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.261094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.261462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.261492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.261868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.262236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.262267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.262656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.263021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.263051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.263410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.263658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.263686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.264026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.264448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.264478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.264834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.265157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.265188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-05-16 09:48:58.265572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.265902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.265931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.266216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.266599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.266629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.266872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.267127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.267156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.267613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.267943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.267971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.268321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.268547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.268574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.268933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.269282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.269314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.269691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.270049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.270092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-05-16 09:48:58.270491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.270890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.270920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.271331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.271744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.271775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.272176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.272537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.272567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.272951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.273313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.273343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.273580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.273813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.273843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.274240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.274606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.274637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.274995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.275375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.275406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 
00:36:04.983 [2024-05-16 09:48:58.275790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.276158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.276187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.276559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.276932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.276961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.277335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.277691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.277722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.278161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.278520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.278549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.278930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.279049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.279089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.983 [2024-05-16 09:48:58.279470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.279834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.983 [2024-05-16 09:48:58.279863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.983 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.280277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.280609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.280638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-05-16 09:48:58.281005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.281338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.281368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.281621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.281751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.281783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.282032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.282408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.282438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.282857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.283200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.283231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.283597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.283957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.283987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.284370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.284736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.284765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.285030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.285435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.285467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-05-16 09:48:58.285722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.286127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.286157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.286542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.286756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.286784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.287046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.287462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.287493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.287724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.288132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.288165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.288535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.288755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.288786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.289156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.289564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.289595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.289837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.290097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.290127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-05-16 09:48:58.290483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.290700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.290729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.291099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.291383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.291411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.291787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.292162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.292193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.292583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.292809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.292837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.293227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.293495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.293524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.293973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.294365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.294395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.294768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.295130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.295161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-05-16 09:48:58.295543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.295757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.295786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.296152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.296506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.296536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.296801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.297159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.297190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.297550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.297964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.297994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.298360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.298731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.298762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.299178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.299425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.299454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 00:36:04.984 [2024-05-16 09:48:58.299585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.299928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.984 [2024-05-16 09:48:58.299958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.984 qpair failed and we were unable to recover it. 
00:36:04.984 [2024-05-16 09:48:58.300087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.300367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.300397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.300615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.300738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.300769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.300993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.301320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.301351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.301597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.301967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.301997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.302252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.302482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.302512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.302749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.303075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.303106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.303467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.303682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.303719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 
00:36:04.985 [2024-05-16 09:48:58.304130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.304489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.304517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.304869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.305104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.305139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.305396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.305610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.305640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.305995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.306241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.306274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.306506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.306854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.306885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.307255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.307633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.307663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.307880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.308093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.308123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 
00:36:04.985 [2024-05-16 09:48:58.308526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.308876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.308905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.309294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.309654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.309684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.310063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.310266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.310295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.310598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.310949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.310979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.311337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.311710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.311739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.312087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.312450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.312479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.312926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.313280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.313311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 
00:36:04.985 [2024-05-16 09:48:58.313701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.314065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.314096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.314444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.314669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.314696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.315030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.315385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.315416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.315784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.316160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.316199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.985 qpair failed and we were unable to recover it. 00:36:04.985 [2024-05-16 09:48:58.316565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.985 [2024-05-16 09:48:58.316930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.316958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.317173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.317480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.317510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.317892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.318272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.318302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-05-16 09:48:58.318546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.318914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.318943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.319298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.319706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.319736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.320092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.320345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.320377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.320732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.321108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.321139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.321387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.321508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.321537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.321873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.322148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.322179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.322547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.322937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.322966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-05-16 09:48:58.323322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.323677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.323707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.323967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.324296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.324328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.324703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.325001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.325031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.325443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.325655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.325685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.326066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.326379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.326409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.326586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.326929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.326964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.327222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.327345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.327374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 
00:36:04.986 [2024-05-16 09:48:58.327730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.328088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.328119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.328356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.328715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.328744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.329110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.329328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.329357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.329571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.329797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.329825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.330162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.330393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.330420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.330779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.331019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.331046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.986 qpair failed and we were unable to recover it. 00:36:04.986 [2024-05-16 09:48:58.331416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.331829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.986 [2024-05-16 09:48:58.331858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-05-16 09:48:58.332220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.332582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.332612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.332972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.333285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.333322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.333702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.334062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.334093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.334443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.334801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.334830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.335188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.335551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.335580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.335959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.336303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.336334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.336698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.337068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.337101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-05-16 09:48:58.337435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.337767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.337796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.338157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.338527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.338557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.338918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.339139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.339168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.339525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.339858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.339886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.340341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.340694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.340729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.341097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.341344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.341372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.341725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.342106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.342136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-05-16 09:48:58.342512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.342841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.342871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.343105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.343418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.343447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.343682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.344024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.344072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.344334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.344710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.344740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.345151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.345366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.345393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.345584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.346005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.346033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.346394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.346752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.346781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-05-16 09:48:58.346981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.347186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.347222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.347619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.347845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.347873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.348205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.348437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.348467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.348678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.348910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.348939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.349310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.349675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.349705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.350085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.350441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.350470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.350831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.351191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.351220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 
00:36:04.987 [2024-05-16 09:48:58.351608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.352003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.987 [2024-05-16 09:48:58.352032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.987 qpair failed and we were unable to recover it. 00:36:04.987 [2024-05-16 09:48:58.352395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.352754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.352782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.353167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.353524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.353553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.353999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.354324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.354355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.354734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.355094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.355124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.355485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.355687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.355714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.355962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.356219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.356249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 
00:36:04.988 [2024-05-16 09:48:58.356616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.356861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.356889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.357137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.357488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.357516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.357907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.358237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.358267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.358501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.358856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.358886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.359246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.359467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.359499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.359834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.360212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.360242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.360587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.360944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.360973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 
00:36:04.988 [2024-05-16 09:48:58.361374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.361722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.361751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.362114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.362477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.362507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.362890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.363123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.363154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.363520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.363734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.363763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.363861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.364088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.364119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.364511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.364740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.364768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 00:36:04.988 [2024-05-16 09:48:58.365145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.365514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.988 [2024-05-16 09:48:58.365544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.988 qpair failed and we were unable to recover it. 
00:36:04.988 [2024-05-16 09:48:58.365910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.988 [2024-05-16 09:48:58.366256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.988 [2024-05-16 09:48:58.366286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:04.988 qpair failed and we were unable to recover it.
[... the same four-line failure sequence (two posix_sock_create connect() failures with errno = 111, a nvme_tcp_qpair_connect_sock error for tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 09:48:58.366 through 09:48:58.471 ...]
00:36:04.994 [2024-05-16 09:48:58.471813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.994 [2024-05-16 09:48:58.472171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:04.994 [2024-05-16 09:48:58.472202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:04.994 qpair failed and we were unable to recover it.
00:36:04.994 [2024-05-16 09:48:58.472606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.472863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.472894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.473266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.473478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.473510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.473866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.474217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.474251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.474613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.474946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.474977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.475348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.475723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.475752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.475982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.476340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.476371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.476621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.476990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.477020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-05-16 09:48:58.477400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.477750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.477780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.478164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.478532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.478561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.478939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.479317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.479348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.479707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.480031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.480075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.480430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.480790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.480820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.481188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.481405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.481434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.481808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.482168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.482198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-05-16 09:48:58.482422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.482793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.482823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.483216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.483594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.483623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.483980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.484342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.484374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.484600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.484978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.485007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.485370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.485743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.485772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.486146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.486503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.486533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.486896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.487240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.487271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 
00:36:04.994 [2024-05-16 09:48:58.487675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.487765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.487794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.488048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.488271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.488302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.488732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.488835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.488864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.489288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.489711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.489742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.489985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.490337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.490368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.994 [2024-05-16 09:48:58.490657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.491038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.994 [2024-05-16 09:48:58.491079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.994 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.491348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.491557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.491586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-05-16 09:48:58.491810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.492195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.492226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.492624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.492885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.492914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.493300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.493668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.493697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.494090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.494482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.494512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.494893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.495263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.495293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.495506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.495820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.495851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.496217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.496544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.496573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-05-16 09:48:58.496788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.497160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.497190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.497458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.497663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.497693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.498091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.498454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.498485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.498809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.499160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.499189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.499543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.499915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.499944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.500321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.500764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.500793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.501189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.501431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.501459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-05-16 09:48:58.501711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.502042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.502100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.502506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.502718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.502746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.503109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.503321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.503349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.503707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.503952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.503980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.504343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.504721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.504751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.505117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.505345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.505373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.505657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.505861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.505889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 
00:36:04.995 [2024-05-16 09:48:58.506139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.506377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.506405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.506537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.506869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.506897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.995 [2024-05-16 09:48:58.507112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.507328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.995 [2024-05-16 09:48:58.507357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.995 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.507483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.507821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.507852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.508283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.508608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.508638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.508988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.509125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.509153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.509404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.509718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.509746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-05-16 09:48:58.510096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.510313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.510343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.510708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.510976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.511003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.511387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.511763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.511792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.512183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.512595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.512624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.513000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.513323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.513357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.513749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.514113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.514142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.514385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.514770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.514799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-05-16 09:48:58.515195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.515534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.515562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.516007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.516225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.516257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.516603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.516981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.517012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.517382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.517615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.517643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.518028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.518389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.518421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.518651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.519037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.519082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.519461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.519806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.519834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 
00:36:04.996 [2024-05-16 09:48:58.520196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.520381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.520410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.520685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.520880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.520911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.521147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.521375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.521403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.521746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.522108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.522140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.522519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.522852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.522880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:04.996 [2024-05-16 09:48:58.523256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.523631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:04.996 [2024-05-16 09:48:58.523661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:04.996 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.524027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.524410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.524441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 
00:36:05.277 [2024-05-16 09:48:58.524700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.525078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.525110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.525337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.525658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.525687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.526096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.526351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.526380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.526787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.527011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.527039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.527301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.527626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.527657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.527891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.528240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.528271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.528661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.529028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.529072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 
00:36:05.277 [2024-05-16 09:48:58.529427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.529811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.529840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.530220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.530591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.530620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.530977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.531326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.531359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.531709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.531942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.531970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.532327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.532708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.532738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.533110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.533362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.533394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.533753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.534119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.534153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 
00:36:05.277 [2024-05-16 09:48:58.534553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.534876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.534906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.535290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.535506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.535536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.535892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.536239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.536269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.536669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.537039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.537080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.537489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.537855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.537883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.538234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.538608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.538637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 00:36:05.277 [2024-05-16 09:48:58.538962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.539181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.277 [2024-05-16 09:48:58.539211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.277 qpair failed and we were unable to recover it. 
00:36:05.277 [2024-05-16 09:48:58.539444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.278 [2024-05-16 09:48:58.539811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.278 [2024-05-16 09:48:58.539841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.278 qpair failed and we were unable to recover it.
00:36:05.278 [... the same failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt timestamped between 09:48:58.540 and 09:48:58.643 ...]
00:36:05.283 [2024-05-16 09:48:58.644124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.283 [2024-05-16 09:48:58.644354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.283 [2024-05-16 09:48:58.644384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.283 qpair failed and we were unable to recover it.
00:36:05.283 [2024-05-16 09:48:58.644742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.283 [2024-05-16 09:48:58.645105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.283 [2024-05-16 09:48:58.645136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.283 qpair failed and we were unable to recover it. 00:36:05.283 [2024-05-16 09:48:58.645303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.283 [2024-05-16 09:48:58.645643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.283 [2024-05-16 09:48:58.645673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.283 qpair failed and we were unable to recover it. 00:36:05.283 [2024-05-16 09:48:58.646074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.283 [2024-05-16 09:48:58.646397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.646427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.646648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.646775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.646801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.647029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.647303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.647336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.647657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.647889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.647916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.648156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.648356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.648384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 
00:36:05.284 [2024-05-16 09:48:58.648565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.648656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.648683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.649032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.649364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.649395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.649606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.649954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.649988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.650336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.650721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.650751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.651097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.651349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.651377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.651730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.652087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.652117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.652353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.652708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.652738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 
00:36:05.284 [2024-05-16 09:48:58.653100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.653482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.653511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.653865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.654230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.654262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.654620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.654964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.654994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.655373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.655734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.655763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.656124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.656501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.656529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.656873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.657242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.657279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.657528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.657899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.657929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 
00:36:05.284 [2024-05-16 09:48:58.658286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.658644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.658674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.659041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.659265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.659295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.659678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.660031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.660085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.660443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.660799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.660829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.661212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.661464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.661494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.661746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.662042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.662086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.662492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.662889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.662919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 
00:36:05.284 [2024-05-16 09:48:58.663300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.663680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.284 [2024-05-16 09:48:58.663710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.284 qpair failed and we were unable to recover it. 00:36:05.284 [2024-05-16 09:48:58.664091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.664501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.664537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.664889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.665229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.665260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.665650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.666012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.666042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.666263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.666599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.666629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.666865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.667237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.667268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.667640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.668029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.668088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 
00:36:05.285 [2024-05-16 09:48:58.668340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.668695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.668725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.669081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.669467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.669498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.669758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.670125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.670160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.670387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.670777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.670808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.671176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.671367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.671401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.671626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.671941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.671970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.672309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.672682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.672711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 
00:36:05.285 [2024-05-16 09:48:58.672957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.673189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.673221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.673614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.673821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.673853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.674219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.674568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.674597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.674943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.675352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.675383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.675741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.676103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.676135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.676367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.676739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.676767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.677134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.677518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.677546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 
00:36:05.285 [2024-05-16 09:48:58.677930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.678168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.678201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.678564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.678952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.678981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.679193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.679499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.679530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.679884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.680269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.680301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.680687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.680894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.680923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.681283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.681660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.681690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 00:36:05.285 [2024-05-16 09:48:58.681912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.682277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.682308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.285 qpair failed and we were unable to recover it. 
00:36:05.285 [2024-05-16 09:48:58.682544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.682945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.285 [2024-05-16 09:48:58.682975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.683314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.683678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.683707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.684078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.684439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.684469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.684704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.685077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.685109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.685523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.685896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.685926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.686149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.686561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.686592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.686949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.687327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.687359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 
00:36:05.286 [2024-05-16 09:48:58.687723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.688097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.688127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.688518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.688880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.688910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.689280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.689496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.689524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.689879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.690260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.690290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.690683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.690775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.690800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.691134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.691518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.691549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.691674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.692046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.692101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 
00:36:05.286 [2024-05-16 09:48:58.692322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.692709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.692739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.692946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.693282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.693313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.693538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.693911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.693941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.694191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.694558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.694589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.694944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.695207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.695238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.695611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.695975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.696005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.696348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.696554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.696584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 
00:36:05.286 [2024-05-16 09:48:58.696979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.697339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.697371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.286 qpair failed and we were unable to recover it. 00:36:05.286 [2024-05-16 09:48:58.697741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.698110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.286 [2024-05-16 09:48:58.698143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.698530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.698908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.698938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.699208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.699433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.699463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.699799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.700213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.700244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.700585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.700963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.700992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.701366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.701735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.701764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 
00:36:05.287 [2024-05-16 09:48:58.702188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.702614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.702644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.703025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.703401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.703431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.703811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.704182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.704215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.704464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.704829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.704862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.705083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.705472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.705503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.705720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.705936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.705968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.706178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.706493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.706523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 
00:36:05.287 [2024-05-16 09:48:58.706863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.707107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.707139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.707461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.707693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.707722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.708096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.708443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.708475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.708900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.709236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.709268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.709658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.710013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.710043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.710454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.710811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.710842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.711201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.711575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.711605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 
00:36:05.287 [2024-05-16 09:48:58.711974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.712324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.712355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.712736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.713105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.713136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.713560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.713765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.713793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.714200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.714589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.714618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.714980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.715185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.715214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.715611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.715833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.287 [2024-05-16 09:48:58.715864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.287 qpair failed and we were unable to recover it. 00:36:05.287 [2024-05-16 09:48:58.716207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.716575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.716603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 
00:36:05.288 [2024-05-16 09:48:58.716838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.717202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.717234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.717595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.718024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.718065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.718455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.718813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.718844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.719085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.719360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.719391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.719737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.720112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.720144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.720534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.720899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.720929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.721289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.721630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.721659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 
00:36:05.288 [2024-05-16 09:48:58.722026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.722366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.722397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.722619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.722977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.723007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.723228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.723597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.723627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.723860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.724252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.724282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.724727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.724928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.724962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.725318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.725527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.725555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.725784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.726008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.726039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 
00:36:05.288 [2024-05-16 09:48:58.726376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.726461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.726486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.726827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.727075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.727108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.727353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.727554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.727584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.727817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.728173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.728204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.728438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.728799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.728828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.729082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.729454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.729483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.729861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.730253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.730285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 
00:36:05.288 [2024-05-16 09:48:58.730639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.731020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.731050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.731416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.731651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.731680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.732046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.732412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.732443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.732814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.733035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.733082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.733514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.733877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.733906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.288 qpair failed and we were unable to recover it. 00:36:05.288 [2024-05-16 09:48:58.734277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.288 [2024-05-16 09:48:58.734489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.734517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.734854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.735218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.735250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 
00:36:05.289 [2024-05-16 09:48:58.735476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.735808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.735838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.736220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.736582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.736613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.736960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.737338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.737368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.737735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.738104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.738135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.738508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.738869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.738899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.739280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.739644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.739674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.739897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.740215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.740246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 
00:36:05.289 [2024-05-16 09:48:58.740476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.740891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.740921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.741306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.741638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.741670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.742075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.742312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.742342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.742743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.742952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.742981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.743417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.743812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.743843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.744194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.744401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.744430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.744789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.745161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.745191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 
00:36:05.289 [2024-05-16 09:48:58.745455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.745810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.745839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.746192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.746549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.746580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.746956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.747335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.747366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.747707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.748077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.748107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.748481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.748697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.748725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.749079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.749317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.749351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.749703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.750074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.750108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 
00:36:05.289 [2024-05-16 09:48:58.750239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.750471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.750500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.750886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.751234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.751266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.751482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.751921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.751950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.752195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.752439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.752469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.289 [2024-05-16 09:48:58.752563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.752757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.289 [2024-05-16 09:48:58.752786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.289 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.752881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.753120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.753151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.753530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.753617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.753648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 
00:36:05.290 [2024-05-16 09:48:58.753998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.754343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.754373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.754743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.755102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.755133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.755517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.755883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.755912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.756270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.756647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.756677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.756920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.757233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.757264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.757620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.758007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.758037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.758264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.758637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.758665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 
00:36:05.290 [2024-05-16 09:48:58.759029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.759405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.759436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.759825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.760194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.760225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.760477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.760832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.760867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.761098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.761451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.761480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.761864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.762236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.762267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.762500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.762763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.762791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.763180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.763583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.763613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 
00:36:05.290 [2024-05-16 09:48:58.763947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.764333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.764365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.764719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.765097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.765126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.765338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.765711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.765739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.765951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.766260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.766291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.766516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.766891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.766920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.767339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.767698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.767734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.768099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.768332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.768361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 
00:36:05.290 [2024-05-16 09:48:58.768720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.769083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.769113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.769475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.769833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.769862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.770221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.770433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.770465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.290 qpair failed and we were unable to recover it. 00:36:05.290 [2024-05-16 09:48:58.770714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.771051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.290 [2024-05-16 09:48:58.771096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.771450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.771814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.771844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.772185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.772562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.772591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.772827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.772911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.772939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 
00:36:05.291 [2024-05-16 09:48:58.773234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.773468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.773500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.773870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.774074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.774112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.774398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.774752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.774782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.775090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.775472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.775502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.775723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.776097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.776130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.776347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.776730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.776760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.777098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.777465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.777494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 
00:36:05.291 [2024-05-16 09:48:58.777862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.778102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.778133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.778516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.778862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.778892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.779234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.779625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.779655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.779877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.780234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.780265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.780629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.780982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.781013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.781297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.781683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.781714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.782090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.782486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.782518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 
00:36:05.291 [2024-05-16 09:48:58.782756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.783081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.783112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.783484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.783852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.783882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.784097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.784416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.784446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.784816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.785182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.785213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.785586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.786011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.291 [2024-05-16 09:48:58.786042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.291 qpair failed and we were unable to recover it. 00:36:05.291 [2024-05-16 09:48:58.786445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.786816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.786847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.787224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.787580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.787609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 
00:36:05.292 [2024-05-16 09:48:58.787828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.788143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.788174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.788578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.788936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.788967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.789259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.789607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.789636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.790015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.790352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.790383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.790763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.791134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.791165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.791513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.791872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.791901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.792282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.792506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.792537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 
00:36:05.292 [2024-05-16 09:48:58.792900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.793242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.793271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.793651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.793872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.793904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.794130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.794530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.794562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.794926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.795195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.795226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.795616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.795980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.796011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.796360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.796592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.796622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.796848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.797185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.797216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 
00:36:05.292 [2024-05-16 09:48:58.797440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.797791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.797823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.798214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.798461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.798490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.798906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.799124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.799154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.799547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.799914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.799946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.800300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.800515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.800543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.800764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.800966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.800997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.801182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.801448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.801479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 
00:36:05.292 [2024-05-16 09:48:58.801753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.801965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.801995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.802216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.802593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.802625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.802835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.803180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.803213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.803429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.803652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.803682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.292 qpair failed and we were unable to recover it. 00:36:05.292 [2024-05-16 09:48:58.804032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.292 [2024-05-16 09:48:58.804280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.804311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.804668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.805042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.805094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.805508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.805744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.805775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 
00:36:05.293 [2024-05-16 09:48:58.806143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.806523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.806553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.806923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.807254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.807289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.807648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.808012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.808043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.808425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.808788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.808819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.809187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.809552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.809584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.809958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.810303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.810334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.810692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.810900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.810929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 
00:36:05.293 [2024-05-16 09:48:58.811280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.811651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.811681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.812125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.812519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.812549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.812918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.813256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.813287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.813510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.813892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.813921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.814281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.814611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.814641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.815017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.815355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.815387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.815764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.816141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.816172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 
00:36:05.293 [2024-05-16 09:48:58.816532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.816740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.816767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.816997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.817352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.817380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.817600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.817985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.818013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.818408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.818769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.818800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.819069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.819438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.819471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.819680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.819887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.819916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.820133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.820467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.820498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 
00:36:05.293 [2024-05-16 09:48:58.820701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.820911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.293 [2024-05-16 09:48:58.820938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.293 qpair failed and we were unable to recover it. 00:36:05.293 [2024-05-16 09:48:58.821283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.821489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.821522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.821949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.822153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.822182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.822281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.822595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.822624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.823008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.823367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.823398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.823770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.824139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.824170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.824572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.824942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.824972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 
00:36:05.617 [2024-05-16 09:48:58.825194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.825415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.825447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.825685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.826038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.826092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.826444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.826797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.826827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.827200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.827458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.827486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.827705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.827942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.827970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.828237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.828596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.828627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.829009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.829389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.829420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 
00:36:05.617 [2024-05-16 09:48:58.829790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.829997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.830026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.830449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.830678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.830706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.830940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.831306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.831337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.831772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.832090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.832121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.832527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.832741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.832770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.832992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.833352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.833381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.833758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.834128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.834159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 
00:36:05.617 [2024-05-16 09:48:58.834383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.834603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.834636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.834856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.835076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.835107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.835468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.835716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.835743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.617 qpair failed and we were unable to recover it. 00:36:05.617 [2024-05-16 09:48:58.836089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.617 [2024-05-16 09:48:58.836320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.836348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.836730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.837099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.837130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.837492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.837863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.837892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.838269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.838476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.838506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 
00:36:05.618 [2024-05-16 09:48:58.838746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.838987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.839016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.839436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.839803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.839833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.840209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.840417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.840447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.840805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.841164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.841198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.841578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.841796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.841825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.842162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.842535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.842564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.842943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.843283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.843314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 
00:36:05.618 [2024-05-16 09:48:58.843673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.843893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.843923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.844287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.844655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.844685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.845072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.845476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.845505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.845877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.846238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.846268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.846642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.846873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.846904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.847277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.847507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.847534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.847887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.848120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.848150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 
00:36:05.618 [2024-05-16 09:48:58.848528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.848889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.848918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.849288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.849662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.849692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.850085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.850336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.850366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.850781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.851132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.851162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.851536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.851738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.851765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.851984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.852226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.852259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 00:36:05.618 [2024-05-16 09:48:58.852625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.852987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.618 [2024-05-16 09:48:58.853017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.618 qpair failed and we were unable to recover it. 
00:36:05.618 [2024-05-16 09:48:58.853245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.618 [2024-05-16 09:48:58.853459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.618 [2024-05-16 09:48:58.853488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.618 qpair failed and we were unable to recover it.
00:36:05.618 [2024-05-16 09:48:58.853850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.618 [2024-05-16 09:48:58.854226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.618 [2024-05-16 09:48:58.854257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.618 qpair failed and we were unable to recover it.
00:36:05.618 [2024-05-16 09:48:58.854627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.618 [2024-05-16 09:48:58.854943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.618 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:36:05.618 [2024-05-16 09:48:58.854973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.618 qpair failed and we were unable to recover it.
00:36:05.619 [2024-05-16 09:48:58.855246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.619 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:36:05.619 [2024-05-16 09:48:58.855460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.619 [2024-05-16 09:48:58.855489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.619 qpair failed and we were unable to recover it.
00:36:05.619 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:05.619 [2024-05-16 09:48:58.855713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.619 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:05.619 [2024-05-16 09:48:58.855954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.619 [2024-05-16 09:48:58.855982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.619 qpair failed and we were unable to recover it.
00:36:05.619 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:05.619 [2024-05-16 09:48:58.856247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.619 [2024-05-16 09:48:58.856582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.619 [2024-05-16 09:48:58.856611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420
00:36:05.619 qpair failed and we were unable to recover it.
00:36:05.619 [2024-05-16 09:48:58.856964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.857307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.857338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.857697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.858067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.858097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.858496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.858831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.858861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.859081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.859311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.859346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.859695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.860020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.860049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.860431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.860799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.860830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.861200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.861568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.861599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 
00:36:05.619 [2024-05-16 09:48:58.861943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.862284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.862314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.862539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.862913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.862941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.863180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.863395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.863425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.863775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.864176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.864206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.864591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.864956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.864988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.865372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.865726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.865757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.866133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.866511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.866539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 
00:36:05.619 [2024-05-16 09:48:58.866896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.867123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.867152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.867496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.867707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.867737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.868081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.868448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.868478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.868876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.869236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.869266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.869634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.870021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.870050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.870422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.870644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.870673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.871031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.871413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.871443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 
00:36:05.619 [2024-05-16 09:48:58.871821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.872074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.872104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.619 qpair failed and we were unable to recover it. 00:36:05.619 [2024-05-16 09:48:58.872524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.872882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.619 [2024-05-16 09:48:58.872911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.873185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.873590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.873619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.873846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.874172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.874203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.874445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.874843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.874873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.875238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.875450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.875482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.875845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.876207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.876239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 
00:36:05.620 [2024-05-16 09:48:58.876616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.876860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.876890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.877250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.877641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.877672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.877888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.878205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.878236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.878470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.878710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.878743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.879094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.879489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.879518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.879889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.880246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.880277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.880639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.881042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.881086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 
00:36:05.620 [2024-05-16 09:48:58.881496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.881727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.881758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.882118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.882357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.882387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.882659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.883014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.883043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.883285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.883495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.883524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.883754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.884114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.884145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.884380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.884695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.884728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.885077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.885446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.885476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 
00:36:05.620 [2024-05-16 09:48:58.885844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.886069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.886101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.886477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.886870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.886901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.887253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.887612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.887644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.887994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.888207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.888237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.888609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.888980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.889011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.889398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.889616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.889646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 00:36:05.620 [2024-05-16 09:48:58.889857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.890243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.890273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.620 qpair failed and we were unable to recover it. 
00:36:05.620 [2024-05-16 09:48:58.890672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.620 [2024-05-16 09:48:58.891032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.891088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.891441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.891662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.891691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.892078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.892291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.892323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.892676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.893040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.893089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.893501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.893714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.893744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.894099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.894460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.894490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.894868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.895274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.895305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 
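The repeated posix_sock_create / nvme_tcp_qpair_connect_sock errors above are the host-side (initiator) code retrying connect() against 10.0.0.2 port 4420 and getting errno 111, which on Linux is ECONNREFUSED, presumably because the target listener on that address has not been created yet (it only appears further down, at the nvmf_subsystem_add_listener step). A quick, optional way to confirm the errno name, assuming python3 is available on the test node:

  # Expected output: ECONNREFUSED - Connection refused
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'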
00:36:05.621 [2024-05-16 09:48:58.895519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.895774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.895808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.896179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.896548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.896578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.896814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.897061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.897093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.897447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.621 [2024-05-16 09:48:58.897820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.897852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:05.621 [2024-05-16 09:48:58.898260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.898474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.621 [2024-05-16 09:48:58.898506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.621 [2024-05-16 09:48:58.898754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.898997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.899029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 
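The rpc_cmd trace interleaved above (host/target_disconnect.sh@19) creates the backing device for the test: a 64 MB RAM-backed malloc bdev with 512-byte blocks, named Malloc0. rpc_cmd is, as far as I can tell, a thin wrapper around SPDK's scripts/rpc.py, so a standalone sketch of the same step against an already-running target app would be:

  # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0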
00:36:05.621 [2024-05-16 09:48:58.899430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.899794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.899824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.900221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.900462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.900491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.900725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.900832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.900860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.901295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.901636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.901665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.901912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.902250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.902282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.902672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.902884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.902913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.903131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.903503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.903533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 
00:36:05.621 [2024-05-16 09:48:58.903903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.904124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.904155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.621 qpair failed and we were unable to recover it. 00:36:05.621 [2024-05-16 09:48:58.904527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.904893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.621 [2024-05-16 09:48:58.904922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.905284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.905648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.905679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.906038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.906478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.906509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.906847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.907234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.907267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.907628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.908003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.908033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.908294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.908543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.908575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 
00:36:05.622 [2024-05-16 09:48:58.908928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.909291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.909322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.909705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.910076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.910108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.910472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.910802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.910833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.911207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.911573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.911603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.911822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.912185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.912219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.912599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.912960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.912988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.913216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.913418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.913447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 
00:36:05.622 [2024-05-16 09:48:58.913825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.914159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.914191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.914427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.914821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.914851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.915101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.915492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.915521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.915879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.916238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.916269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.916524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.916899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.916930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.917170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.917495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.917526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.917759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.918111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.918142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 
00:36:05.622 [2024-05-16 09:48:58.918518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.918880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.918910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.919263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.919619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.919649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.920000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.920213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.920243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.920646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.920849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.920877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.921236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.921597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.921627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 [2024-05-16 09:48:58.921965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.922339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.922368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 00:36:05.622 Malloc0 00:36:05.622 [2024-05-16 09:48:58.922652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.922891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.622 [2024-05-16 09:48:58.922919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.622 qpair failed and we were unable to recover it. 
00:36:05.623 [2024-05-16 09:48:58.923188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.623 [2024-05-16 09:48:58.923572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.923602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:05.623 [2024-05-16 09:48:58.923968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.623 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.623 [2024-05-16 09:48:58.924332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.924363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.924587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.924997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.925026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.925425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.925788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.925817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.926204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.926558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.926587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.926991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.927335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.927366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 
00:36:05.623 [2024-05-16 09:48:58.927623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.927994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.928024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.928423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.928800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.928830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.929213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.929415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.929442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.929811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.929808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.623 [2024-05-16 09:48:58.930028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.930074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.930434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.930809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.930840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.931218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.931437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.931467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.931622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.931896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.931925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 
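The nvmf_create_transport call traced at host/target_disconnect.sh@21, confirmed by the "TCP Transport Init" notice from tcp.c, brings up the target-side NVMe/TCP transport. Reusing the exact arguments from the trace (no claim that the -o flag is needed outside this test), the standalone equivalent would be roughly:

  # Initialize the NVMe-oF TCP transport in the running SPDK target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o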
00:36:05.623 [2024-05-16 09:48:58.932319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.932551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.932579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.932958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.933367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.933397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.933620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.933852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.933882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.934001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.934255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.934284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.934666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.934905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.934933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.935305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.935533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.935562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.935909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.936244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.936274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 
00:36:05.623 [2024-05-16 09:48:58.936497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.936896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.936925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.937018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.937262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.937292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.937674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.938037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.938078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 [2024-05-16 09:48:58.938323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.938680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.938710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.623 qpair failed and we were unable to recover it. 00:36:05.623 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.623 [2024-05-16 09:48:58.939088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:05.623 [2024-05-16 09:48:58.939537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.623 [2024-05-16 09:48:58.939568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.624 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.624 [2024-05-16 09:48:58.939923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.940283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.940320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 
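host/target_disconnect.sh@22 then creates the test subsystem nqn.2016-06.io.spdk:cnode1; as I read the flags, -a allows any host NQN to connect and -s sets the serial number. A standalone sketch with the same arguments:

  # Create the subsystem the disconnect test will target
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001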
00:36:05.624 [2024-05-16 09:48:58.940705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.941094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.941124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.941509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.941875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.941904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.942289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.942652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.942681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.943035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.943318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.943347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.943715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.943824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.943853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.944094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.944313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.944342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.944668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.945026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.945068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 
00:36:05.624 [2024-05-16 09:48:58.945310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.945663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.945692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.946037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.946316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.946344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.946706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.946829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.946862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.946967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.947326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.947358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.947720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.948100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.948131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.948527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.948899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.948929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.949362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.949726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.949758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 
00:36:05.624 [2024-05-16 09:48:58.950095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.950486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.950515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.950747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.950971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.951001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.624 [2024-05-16 09:48:58.951254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:05.624 [2024-05-16 09:48:58.951625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.951654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.624 [2024-05-16 09:48:58.952015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.624 [2024-05-16 09:48:58.952378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.952411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.952835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.953223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.953254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.953537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.953894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.953925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 
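host/target_disconnect.sh@24 attaches the Malloc0 bdev created earlier as a namespace of that subsystem. The same step outside the harness (sketch):

  # Expose Malloc0 as a namespace of nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0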
00:36:05.624 [2024-05-16 09:48:58.954295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.954660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.954691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.955068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.955407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.955439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.955797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.956014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.956043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.624 [2024-05-16 09:48:58.956466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.956821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.624 [2024-05-16 09:48:58.956850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.624 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.957220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.957616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.957645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.958013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.958382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.958413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.958782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.959135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.959165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 
00:36:05.625 [2024-05-16 09:48:58.959551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.959911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.959939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.960202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.960565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.960602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.960965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.961211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.961241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.961481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.961785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.961813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.962186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.962575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.962605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.625 [2024-05-16 09:48:58.962952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.963286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.963318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 
00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:05.625 [2024-05-16 09:48:58.963555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.625 [2024-05-16 09:48:58.963906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.625 [2024-05-16 09:48:58.963935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.964302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.964705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.964734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.964945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.965067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.965100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.965565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.965678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.965705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.965928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.966273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.966304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.966707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.966929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.966957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 
00:36:05.625 [2024-05-16 09:48:58.967210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.967436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.967465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.967816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.968184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.968214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.968599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.968823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.968850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.969108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.969520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.969549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0840000b90 with addr=10.0.0.2, port=4420 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 [2024-05-16 09:48:58.969898] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:05.625 [2024-05-16 09:48:58.969917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.625 [2024-05-16 09:48:58.970248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:05.625 [2024-05-16 09:48:58.976579] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:36:05.625 [2024-05-16 09:48:58.976694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f0840000b90 (107): Transport endpoint is not connected 00:36:05.625 [2024-05-16 09:48:58.976815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.625 qpair failed and we were unable to recover it. 
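host/target_disconnect.sh@25 and @26 add the TCP listeners. The nvmf_rpc.c warning in the log only concerns the deprecated [listen_]address.transport field versus trtype, and the tcp.c notice confirms the target is now listening on 10.0.0.2 port 4420, after which the failures change character: the connect() ECONNREFUSED storm stops and the remaining errors are NVMe-oF Fabrics CONNECT rejections. The same two steps as standalone rpc.py calls (sketch, same arguments as the trace):

  # Expose the subsystem and the discovery service over TCP on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420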
00:36:05.625 [2024-05-16 09:48:58.980713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.625 [2024-05-16 09:48:58.980857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.625 [2024-05-16 09:48:58.980917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.625 [2024-05-16 09:48:58.980941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.625 [2024-05-16 09:48:58.980960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.625 [2024-05-16 09:48:58.981008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.625 qpair failed and we were unable to recover it. 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.625 09:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 538924 00:36:05.625 [2024-05-16 09:48:58.990518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.625 [2024-05-16 09:48:58.990613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:58.990649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:58.990664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:58.990678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:58.990710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.000480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.000569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.000596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.000609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.000618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.000643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 
00:36:05.626 [2024-05-16 09:48:59.010538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.010621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.010643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.010652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.010659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.010676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.020626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.020698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.020719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.020732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.020739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.020756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.030560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.030634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.030657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.030666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.030674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.030692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 
00:36:05.626 [2024-05-16 09:48:59.040541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.040608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.040629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.040637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.040644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.040661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.050608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.050688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.050710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.050718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.050725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.050743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.060664] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.060730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.060751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.060758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.060766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.060782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 
00:36:05.626 [2024-05-16 09:48:59.070654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.070731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.070768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.070778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.070785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.070807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.080643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.080715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.080751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.626 [2024-05-16 09:48:59.080761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.626 [2024-05-16 09:48:59.080769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.626 [2024-05-16 09:48:59.080792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.626 qpair failed and we were unable to recover it. 00:36:05.626 [2024-05-16 09:48:59.090690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.626 [2024-05-16 09:48:59.090768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.626 [2024-05-16 09:48:59.090805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.627 [2024-05-16 09:48:59.090815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.627 [2024-05-16 09:48:59.090824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.627 [2024-05-16 09:48:59.090846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.627 qpair failed and we were unable to recover it. 
00:36:05.627 [2024-05-16 09:48:59.100766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.627 [2024-05-16 09:48:59.100839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.627 [2024-05-16 09:48:59.100875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.627 [2024-05-16 09:48:59.100886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.627 [2024-05-16 09:48:59.100895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.627 [2024-05-16 09:48:59.100918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.627 qpair failed and we were unable to recover it. 00:36:05.627 [2024-05-16 09:48:59.110785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.627 [2024-05-16 09:48:59.110904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.627 [2024-05-16 09:48:59.110940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.627 [2024-05-16 09:48:59.110949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.627 [2024-05-16 09:48:59.110957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.627 [2024-05-16 09:48:59.110977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.627 qpair failed and we were unable to recover it. 00:36:05.627 [2024-05-16 09:48:59.120813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.627 [2024-05-16 09:48:59.120886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.627 [2024-05-16 09:48:59.120910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.627 [2024-05-16 09:48:59.120918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.627 [2024-05-16 09:48:59.120924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.627 [2024-05-16 09:48:59.120943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.627 qpair failed and we were unable to recover it. 
00:36:05.627 [2024-05-16 09:48:59.130873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.627 [2024-05-16 09:48:59.130948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.627 [2024-05-16 09:48:59.130970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.627 [2024-05-16 09:48:59.130978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.627 [2024-05-16 09:48:59.130985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.627 [2024-05-16 09:48:59.131002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.627 qpair failed and we were unable to recover it. 00:36:05.627 [2024-05-16 09:48:59.140893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.627 [2024-05-16 09:48:59.140955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.627 [2024-05-16 09:48:59.140975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.627 [2024-05-16 09:48:59.140983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.627 [2024-05-16 09:48:59.140990] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.627 [2024-05-16 09:48:59.141007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.627 qpair failed and we were unable to recover it. 00:36:05.900 [2024-05-16 09:48:59.150902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.900 [2024-05-16 09:48:59.150974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.900 [2024-05-16 09:48:59.150994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.900 [2024-05-16 09:48:59.151003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.900 [2024-05-16 09:48:59.151010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.900 [2024-05-16 09:48:59.151034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.900 qpair failed and we were unable to recover it. 
00:36:05.900 [2024-05-16 09:48:59.160950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.900 [2024-05-16 09:48:59.161018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.900 [2024-05-16 09:48:59.161039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.900 [2024-05-16 09:48:59.161047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.900 [2024-05-16 09:48:59.161062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.900 [2024-05-16 09:48:59.161081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.900 qpair failed and we were unable to recover it. 00:36:05.900 [2024-05-16 09:48:59.171002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.900 [2024-05-16 09:48:59.171125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.900 [2024-05-16 09:48:59.171146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.900 [2024-05-16 09:48:59.171155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.900 [2024-05-16 09:48:59.171163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.900 [2024-05-16 09:48:59.171180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.900 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.181025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.181091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.181111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.181120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.181127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.181144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 
00:36:05.901 [2024-05-16 09:48:59.191046] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.191122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.191142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.191150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.191157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.191174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.201074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.201139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.201165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.201174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.201181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.201198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.211126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.211210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.211231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.211239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.211246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.211265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 
00:36:05.901 [2024-05-16 09:48:59.221135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.221211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.221232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.221240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.221247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.221264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.231178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.231246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.231266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.231275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.231282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.231301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.241180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.241248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.241268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.241277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.241284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.241307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 
00:36:05.901 [2024-05-16 09:48:59.251244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.251373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.251399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.251410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.251417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.251436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.261384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.261473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.261494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.261502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.261510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.261528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.271388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.271463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.271483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.271491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.271498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.271517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 
00:36:05.901 [2024-05-16 09:48:59.281379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.281442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.281462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.901 [2024-05-16 09:48:59.281471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.901 [2024-05-16 09:48:59.281477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.901 [2024-05-16 09:48:59.281494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.901 qpair failed and we were unable to recover it. 00:36:05.901 [2024-05-16 09:48:59.291476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.901 [2024-05-16 09:48:59.291556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.901 [2024-05-16 09:48:59.291576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.291583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.291590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.291606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.301433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.301488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.301508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.301516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.301523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.301539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 
00:36:05.902 [2024-05-16 09:48:59.311453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.311514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.311534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.311541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.311548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.311565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.321436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.321499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.321520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.321528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.321534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.321551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.331510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.331590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.331611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.331618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.331631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.331649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 
00:36:05.902 [2024-05-16 09:48:59.341519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.341586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.341607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.341615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.341624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.341642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.351513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.351574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.351594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.351602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.351609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.351625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.361575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.361642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.361662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.361670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.361677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.361693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 
00:36:05.902 [2024-05-16 09:48:59.371605] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.371676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.371696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.371704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.371711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.371728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.381631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.381703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.381730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.381738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.381746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.381766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 00:36:05.902 [2024-05-16 09:48:59.391661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.391733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.391753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.391761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.391768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.902 [2024-05-16 09:48:59.391786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.902 qpair failed and we were unable to recover it. 
00:36:05.902 [2024-05-16 09:48:59.401710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.902 [2024-05-16 09:48:59.401775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.902 [2024-05-16 09:48:59.401795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.902 [2024-05-16 09:48:59.401803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.902 [2024-05-16 09:48:59.401810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.903 [2024-05-16 09:48:59.401827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.903 qpair failed and we were unable to recover it. 00:36:05.903 [2024-05-16 09:48:59.411788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.903 [2024-05-16 09:48:59.411881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.903 [2024-05-16 09:48:59.411918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.903 [2024-05-16 09:48:59.411927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.903 [2024-05-16 09:48:59.411935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.903 [2024-05-16 09:48:59.411959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.903 qpair failed and we were unable to recover it. 00:36:05.903 [2024-05-16 09:48:59.421774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.903 [2024-05-16 09:48:59.421840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.903 [2024-05-16 09:48:59.421862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.903 [2024-05-16 09:48:59.421876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.903 [2024-05-16 09:48:59.421883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.903 [2024-05-16 09:48:59.421903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.903 qpair failed and we were unable to recover it. 
00:36:05.903 [2024-05-16 09:48:59.431797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.903 [2024-05-16 09:48:59.431879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.903 [2024-05-16 09:48:59.431902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.903 [2024-05-16 09:48:59.431910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.903 [2024-05-16 09:48:59.431918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.903 [2024-05-16 09:48:59.431936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.903 qpair failed and we were unable to recover it. 00:36:05.903 [2024-05-16 09:48:59.441870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.903 [2024-05-16 09:48:59.441932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.903 [2024-05-16 09:48:59.441953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.903 [2024-05-16 09:48:59.441961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.903 [2024-05-16 09:48:59.441968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.903 [2024-05-16 09:48:59.441985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.903 qpair failed and we were unable to recover it. 00:36:05.903 [2024-05-16 09:48:59.451884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:05.903 [2024-05-16 09:48:59.451963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:05.903 [2024-05-16 09:48:59.451984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:05.903 [2024-05-16 09:48:59.451992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:05.903 [2024-05-16 09:48:59.451999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:05.903 [2024-05-16 09:48:59.452015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:05.903 qpair failed and we were unable to recover it. 
00:36:06.185 [2024-05-16 09:48:59.461876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.461936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.461958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.461966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.461973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.461990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.471925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.471989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.472010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.472018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.472025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.472042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.481959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.482023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.482042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.482050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.482065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.482083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 
00:36:06.185 [2024-05-16 09:48:59.491993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.492071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.492091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.492100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.492106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.492123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.502017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.502089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.502110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.502118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.502125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.502144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.512044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.512114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.512139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.512147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.512154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.512173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 
00:36:06.185 [2024-05-16 09:48:59.522068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.522130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.522150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.522159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.522165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.522183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.531991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.532078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.532099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.532107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.532114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.532133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.542021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.542128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.542152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.542161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.542169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.542188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 
00:36:06.185 [2024-05-16 09:48:59.552030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.552107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.552128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.552136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.552144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.552168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.562085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.562187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.562209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.562217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.562224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.562240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 00:36:06.185 [2024-05-16 09:48:59.572223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.572307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.185 [2024-05-16 09:48:59.572327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.185 [2024-05-16 09:48:59.572335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.185 [2024-05-16 09:48:59.572343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.185 [2024-05-16 09:48:59.572361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.185 qpair failed and we were unable to recover it. 
00:36:06.185 [2024-05-16 09:48:59.582221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.185 [2024-05-16 09:48:59.582293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.582313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.582322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.582328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.582346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.592269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.592334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.592354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.592362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.592368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.592385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.602192] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.602258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.602283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.602291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.602298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.602315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 
00:36:06.186 [2024-05-16 09:48:59.612378] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.612459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.612479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.612487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.612493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.612512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.622417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.622488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.622507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.622514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.622521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.622539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.632414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.632476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.632497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.632505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.632511] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.632528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 
00:36:06.186 [2024-05-16 09:48:59.642313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.642380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.642399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.642407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.642414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.642441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.652511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.652601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.652621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.652629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.652636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.652653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.662495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.662569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.662589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.662598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.662605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.662623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 
00:36:06.186 [2024-05-16 09:48:59.672529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.672595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.672615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.672623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.672630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.672647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.682621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.682690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.682716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.682725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.682732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.682751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.692592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.692668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.692695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.692703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.692710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.692729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 
00:36:06.186 [2024-05-16 09:48:59.702640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.702714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.702735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.702743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.702750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.702768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.712635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.186 [2024-05-16 09:48:59.712694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.186 [2024-05-16 09:48:59.712714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.186 [2024-05-16 09:48:59.712722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.186 [2024-05-16 09:48:59.712729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.186 [2024-05-16 09:48:59.712746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.186 qpair failed and we were unable to recover it. 00:36:06.186 [2024-05-16 09:48:59.722690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.187 [2024-05-16 09:48:59.722756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.187 [2024-05-16 09:48:59.722776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.187 [2024-05-16 09:48:59.722784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.187 [2024-05-16 09:48:59.722792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.187 [2024-05-16 09:48:59.722808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.187 qpair failed and we were unable to recover it. 
00:36:06.187 [2024-05-16 09:48:59.732727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.187 [2024-05-16 09:48:59.732821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.187 [2024-05-16 09:48:59.732842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.187 [2024-05-16 09:48:59.732850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.187 [2024-05-16 09:48:59.732864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.187 [2024-05-16 09:48:59.732882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.187 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.742729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.742795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.742815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.742823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.742830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.742847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.752740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.752811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.752832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.752840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.752846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.752863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 
00:36:06.490 [2024-05-16 09:48:59.762809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.762871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.762891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.762900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.762906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.762923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.772845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.772922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.772942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.772950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.772958] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.772974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.782900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.782967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.782987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.782995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.783001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.783018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 
00:36:06.490 [2024-05-16 09:48:59.792899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.792964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.792983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.792991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.792998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.793014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.802978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.803041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.803068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.803077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.803084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.803101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.812943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.813021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.813041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.813050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.813064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.813082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 
00:36:06.490 [2024-05-16 09:48:59.822982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.823056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.823077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.823091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.823099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.823116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.833035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.833126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.833148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.833156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.833164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.833180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.490 qpair failed and we were unable to recover it. 00:36:06.490 [2024-05-16 09:48:59.843082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.490 [2024-05-16 09:48:59.843144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.490 [2024-05-16 09:48:59.843163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.490 [2024-05-16 09:48:59.843172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.490 [2024-05-16 09:48:59.843179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.490 [2024-05-16 09:48:59.843195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 
00:36:06.491 [2024-05-16 09:48:59.853132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.853208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.853228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.853236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.853242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.853260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.863124] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.863189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.863209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.863216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.863222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.863240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.873183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.873250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.873271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.873279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.873286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.873304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 
00:36:06.491 [2024-05-16 09:48:59.883205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.883269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.883289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.883298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.883306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.883322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.893257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.893341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.893362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.893371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.893379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.893397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.903257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.903315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.903335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.903344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.903351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.903368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 
00:36:06.491 [2024-05-16 09:48:59.913297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.913398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.913419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.913433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.913440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.913457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.923310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.923369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.923391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.923398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.923405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.923423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.933377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.933452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.933472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.933480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.933487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.933503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 
00:36:06.491 [2024-05-16 09:48:59.943383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.943449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.943470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.943478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.943486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.943504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.953404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.953475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.953496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.953503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.953510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.953527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.963461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.963526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.963547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.963555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.963561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.963581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 
00:36:06.491 [2024-05-16 09:48:59.973443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.973517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.973537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.973546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.491 [2024-05-16 09:48:59.973553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.491 [2024-05-16 09:48:59.973570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.491 qpair failed and we were unable to recover it. 00:36:06.491 [2024-05-16 09:48:59.983510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.491 [2024-05-16 09:48:59.983606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.491 [2024-05-16 09:48:59.983627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.491 [2024-05-16 09:48:59.983635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.492 [2024-05-16 09:48:59.983642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.492 [2024-05-16 09:48:59.983660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.492 qpair failed and we were unable to recover it. 00:36:06.492 [2024-05-16 09:48:59.993505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.492 [2024-05-16 09:48:59.993571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.492 [2024-05-16 09:48:59.993591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.492 [2024-05-16 09:48:59.993600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.492 [2024-05-16 09:48:59.993607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.492 [2024-05-16 09:48:59.993624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.492 qpair failed and we were unable to recover it. 
00:36:06.492 [2024-05-16 09:49:00.003581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.492 [2024-05-16 09:49:00.003687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.492 [2024-05-16 09:49:00.003715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.492 [2024-05-16 09:49:00.003724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.492 [2024-05-16 09:49:00.003731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.492 [2024-05-16 09:49:00.003749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.492 qpair failed and we were unable to recover it. 00:36:06.492 [2024-05-16 09:49:00.013590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.492 [2024-05-16 09:49:00.013672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.492 [2024-05-16 09:49:00.013694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.492 [2024-05-16 09:49:00.013703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.492 [2024-05-16 09:49:00.013710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.492 [2024-05-16 09:49:00.013730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.492 qpair failed and we were unable to recover it. 00:36:06.492 [2024-05-16 09:49:00.023656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.492 [2024-05-16 09:49:00.023722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.492 [2024-05-16 09:49:00.023743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.492 [2024-05-16 09:49:00.023751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.492 [2024-05-16 09:49:00.023758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.492 [2024-05-16 09:49:00.023775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.492 qpair failed and we were unable to recover it. 
00:36:06.492 [2024-05-16 09:49:00.033652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.492 [2024-05-16 09:49:00.033724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.492 [2024-05-16 09:49:00.033745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.492 [2024-05-16 09:49:00.033753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.492 [2024-05-16 09:49:00.033760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.492 [2024-05-16 09:49:00.033779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.492 qpair failed and we were unable to recover it. 00:36:06.831 [2024-05-16 09:49:00.043710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.831 [2024-05-16 09:49:00.043779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.831 [2024-05-16 09:49:00.043800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.831 [2024-05-16 09:49:00.043812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.831 [2024-05-16 09:49:00.043821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.831 [2024-05-16 09:49:00.043845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.831 qpair failed and we were unable to recover it. 00:36:06.831 [2024-05-16 09:49:00.053724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.831 [2024-05-16 09:49:00.053800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.831 [2024-05-16 09:49:00.053821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.831 [2024-05-16 09:49:00.053830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.831 [2024-05-16 09:49:00.053838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.831 [2024-05-16 09:49:00.053856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.831 qpair failed and we were unable to recover it. 
00:36:06.831 [2024-05-16 09:49:00.063769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.831 [2024-05-16 09:49:00.063838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.831 [2024-05-16 09:49:00.063873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.831 [2024-05-16 09:49:00.063884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.831 [2024-05-16 09:49:00.063892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.831 [2024-05-16 09:49:00.063914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.831 qpair failed and we were unable to recover it. 00:36:06.831 [2024-05-16 09:49:00.073770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.073837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.073860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.073870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.073877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.073896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.083835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.083901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.083924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.083933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.083941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.083961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 
00:36:06.832 [2024-05-16 09:49:00.093874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.093951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.093980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.093989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.093997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.094016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.103864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.103928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.103950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.103959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.103966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.103982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.113907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.113973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.113993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.114001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.114007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.114024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 
00:36:06.832 [2024-05-16 09:49:00.123954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.124020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.124039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.124047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.124077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.124095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.133876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.133942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.133962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.133970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.133984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.134001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.143980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.144069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.144090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.144098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.144106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.144123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 
00:36:06.832 [2024-05-16 09:49:00.154016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.154108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.154129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.154139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.154147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.154165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.164064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.164130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.164150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.164158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.164165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.164182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.174100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.174175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.174195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.174203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.174210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.174227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 
00:36:06.832 [2024-05-16 09:49:00.184119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.184193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.184213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.184221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.184228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.184245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.194138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.194212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.194232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.194240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.194247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.832 [2024-05-16 09:49:00.194264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.832 qpair failed and we were unable to recover it. 00:36:06.832 [2024-05-16 09:49:00.204175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.832 [2024-05-16 09:49:00.204257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.832 [2024-05-16 09:49:00.204277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.832 [2024-05-16 09:49:00.204286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.832 [2024-05-16 09:49:00.204292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.204309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 
00:36:06.833 [2024-05-16 09:49:00.214177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.214259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.214279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.214287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.214294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.214313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.224232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.224331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.224352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.224367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.224374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.224392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.234154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.234224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.234247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.234255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.234263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.234283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 
00:36:06.833 [2024-05-16 09:49:00.244205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.244282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.244304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.244312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.244320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.244338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.254342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.254420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.254439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.254447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.254455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.254471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.264390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.264450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.264470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.264478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.264485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.264502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 
00:36:06.833 [2024-05-16 09:49:00.274423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.274493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.274515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.274523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.274530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.274547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.284413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.284476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.284496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.284503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.284510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.284527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.294468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.294550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.294570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.294578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.294585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.294603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 
00:36:06.833 [2024-05-16 09:49:00.304476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.304543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.304563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.304571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.304579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.304596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.314505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.314567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.314587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.314601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.314608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.314625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.324549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.324620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.324641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.324649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.324656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.324676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 
00:36:06.833 [2024-05-16 09:49:00.334566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.334661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.833 [2024-05-16 09:49:00.334681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.833 [2024-05-16 09:49:00.334691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.833 [2024-05-16 09:49:00.334698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.833 [2024-05-16 09:49:00.334715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.833 qpair failed and we were unable to recover it. 00:36:06.833 [2024-05-16 09:49:00.344594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.833 [2024-05-16 09:49:00.344663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.834 [2024-05-16 09:49:00.344683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.834 [2024-05-16 09:49:00.344691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.834 [2024-05-16 09:49:00.344697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.834 [2024-05-16 09:49:00.344715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.834 qpair failed and we were unable to recover it. 00:36:06.834 [2024-05-16 09:49:00.354624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.834 [2024-05-16 09:49:00.354690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.834 [2024-05-16 09:49:00.354710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.834 [2024-05-16 09:49:00.354719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.834 [2024-05-16 09:49:00.354726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.834 [2024-05-16 09:49:00.354743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.834 qpair failed and we were unable to recover it. 
00:36:06.834 [2024-05-16 09:49:00.364685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.834 [2024-05-16 09:49:00.364753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.834 [2024-05-16 09:49:00.364780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.834 [2024-05-16 09:49:00.364789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.834 [2024-05-16 09:49:00.364796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.834 [2024-05-16 09:49:00.364818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.834 qpair failed and we were unable to recover it. 00:36:06.834 [2024-05-16 09:49:00.374698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.834 [2024-05-16 09:49:00.374782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.834 [2024-05-16 09:49:00.374802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.834 [2024-05-16 09:49:00.374810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.834 [2024-05-16 09:49:00.374817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.834 [2024-05-16 09:49:00.374835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.834 qpair failed and we were unable to recover it. 00:36:06.834 [2024-05-16 09:49:00.384750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:06.834 [2024-05-16 09:49:00.384813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:06.834 [2024-05-16 09:49:00.384833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:06.834 [2024-05-16 09:49:00.384841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:06.834 [2024-05-16 09:49:00.384847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:06.834 [2024-05-16 09:49:00.384865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:06.834 qpair failed and we were unable to recover it. 
00:36:07.097 [2024-05-16 09:49:00.394774] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.394837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.394858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.394865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.394872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.394888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.404809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.404893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.404931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.404939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.404946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.404964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.414816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.414939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.414961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.414970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.414976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.414994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 
00:36:07.097 [2024-05-16 09:49:00.424854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.424929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.424950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.424957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.424964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.424983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.434871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.434936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.434956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.434964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.434970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.434987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.444938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.445005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.445024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.445032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.445039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.445070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 
00:36:07.097 [2024-05-16 09:49:00.454966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.455051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.455078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.455086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.455094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.455110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.464981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.465049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.465076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.465085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.465092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.465110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.474973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.475034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.475062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.475071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.475078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.475094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 
00:36:07.097 [2024-05-16 09:49:00.485048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.485121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.485140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.485148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.485155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.485172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.495083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.495168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.495193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.495201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.495207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.495226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 00:36:07.097 [2024-05-16 09:49:00.505112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.505177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.505197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.505205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.097 [2024-05-16 09:49:00.505211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.097 [2024-05-16 09:49:00.505228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.097 qpair failed and we were unable to recover it. 
00:36:07.097 [2024-05-16 09:49:00.515149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.097 [2024-05-16 09:49:00.515219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.097 [2024-05-16 09:49:00.515238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.097 [2024-05-16 09:49:00.515247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.515253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.515270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.525166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.525243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.525264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.525272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.525278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.525297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.535225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.535291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.535311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.535320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.535332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.535349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 
00:36:07.098 [2024-05-16 09:49:00.545144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.545213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.545233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.545240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.545248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.545265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.555254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.555324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.555344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.555352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.555358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.555375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.565326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.565393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.565413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.565421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.565428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.565444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 
00:36:07.098 [2024-05-16 09:49:00.575336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.575413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.575434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.575442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.575451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.575470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.585361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.585429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.585451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.585459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.585466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.585483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.595370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.595434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.595454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.595462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.595469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.595485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 
00:36:07.098 [2024-05-16 09:49:00.605387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.605447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.605466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.605475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.605482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.605499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.615474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.615548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.615568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.615576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.615583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.615599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 00:36:07.098 [2024-05-16 09:49:00.625438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.098 [2024-05-16 09:49:00.625509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.098 [2024-05-16 09:49:00.625529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.098 [2024-05-16 09:49:00.625537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.098 [2024-05-16 09:49:00.625551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.098 [2024-05-16 09:49:00.625568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.098 qpair failed and we were unable to recover it. 
00:36:07.098 [2024-05-16 09:49:00.635501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.098 [2024-05-16 09:49:00.635569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.098 [2024-05-16 09:49:00.635589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.098 [2024-05-16 09:49:00.635598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.098 [2024-05-16 09:49:00.635604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90
00:36:07.098 [2024-05-16 09:49:00.635620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.098 qpair failed and we were unable to recover it.
[The same seven-record error sequence repeats for the 67 intermediate I/O-qpair connect attempts, roughly every 10 ms, from 2024-05-16 09:49:00.645 through 09:49:01.307 (console timestamps 00:36:07.098-00:36:07.892), all against tqpair=0x7f0840000b90 at traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1; every attempt ends with "qpair failed and we were unable to recover it."]
00:36:07.892 [2024-05-16 09:49:01.317523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.892 [2024-05-16 09:49:01.317602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.892 [2024-05-16 09:49:01.317616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.892 [2024-05-16 09:49:01.317623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.892 [2024-05-16 09:49:01.317630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90
00:36:07.892 [2024-05-16 09:49:01.317644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.892 qpair failed and we were unable to recover it.
00:36:07.892 [2024-05-16 09:49:01.327533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.327597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.327611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.327618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.327625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.327639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.892 qpair failed and we were unable to recover it. 00:36:07.892 [2024-05-16 09:49:01.337582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.337638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.337652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.337660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.337666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.337684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.892 qpair failed and we were unable to recover it. 00:36:07.892 [2024-05-16 09:49:01.347555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.347604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.347618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.347625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.347631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.347644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.892 qpair failed and we were unable to recover it. 
00:36:07.892 [2024-05-16 09:49:01.357620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.357671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.357685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.357692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.357699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.357712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.892 qpair failed and we were unable to recover it. 00:36:07.892 [2024-05-16 09:49:01.367628] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.367688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.367702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.367709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.367715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.367728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.892 qpair failed and we were unable to recover it. 00:36:07.892 [2024-05-16 09:49:01.377691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.377746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.377760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.377767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.377773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.377787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.892 qpair failed and we were unable to recover it. 
00:36:07.892 [2024-05-16 09:49:01.387709] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.892 [2024-05-16 09:49:01.387828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.892 [2024-05-16 09:49:01.387846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.892 [2024-05-16 09:49:01.387853] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.892 [2024-05-16 09:49:01.387859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.892 [2024-05-16 09:49:01.387873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 00:36:07.893 [2024-05-16 09:49:01.397746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.893 [2024-05-16 09:49:01.397803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.893 [2024-05-16 09:49:01.397827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.893 [2024-05-16 09:49:01.397836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.893 [2024-05-16 09:49:01.397842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.893 [2024-05-16 09:49:01.397861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 00:36:07.893 [2024-05-16 09:49:01.407780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.893 [2024-05-16 09:49:01.407840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.893 [2024-05-16 09:49:01.407865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.893 [2024-05-16 09:49:01.407873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.893 [2024-05-16 09:49:01.407880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.893 [2024-05-16 09:49:01.407899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 
00:36:07.893 [2024-05-16 09:49:01.417776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.893 [2024-05-16 09:49:01.417838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.893 [2024-05-16 09:49:01.417854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.893 [2024-05-16 09:49:01.417861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.893 [2024-05-16 09:49:01.417868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.893 [2024-05-16 09:49:01.417883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 00:36:07.893 [2024-05-16 09:49:01.427785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.893 [2024-05-16 09:49:01.427839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.893 [2024-05-16 09:49:01.427854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.893 [2024-05-16 09:49:01.427862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.893 [2024-05-16 09:49:01.427873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.893 [2024-05-16 09:49:01.427888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 00:36:07.893 [2024-05-16 09:49:01.437856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.893 [2024-05-16 09:49:01.437916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.893 [2024-05-16 09:49:01.437940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.893 [2024-05-16 09:49:01.437948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.893 [2024-05-16 09:49:01.437955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.893 [2024-05-16 09:49:01.437974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 
00:36:07.893 [2024-05-16 09:49:01.447856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.893 [2024-05-16 09:49:01.447917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.893 [2024-05-16 09:49:01.447932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.893 [2024-05-16 09:49:01.447939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.893 [2024-05-16 09:49:01.447946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:07.893 [2024-05-16 09:49:01.447960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.893 qpair failed and we were unable to recover it. 00:36:08.155 [2024-05-16 09:49:01.457919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.155 [2024-05-16 09:49:01.457999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.458014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.458021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.458028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.458042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.467897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.467943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.467958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.467965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.467971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.467985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 
00:36:08.156 [2024-05-16 09:49:01.477970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.478026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.478040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.478048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.478058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.478072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.488013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.488118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.488133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.488140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.488146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.488161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.497996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.498057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.498071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.498079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.498085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.498099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 
00:36:08.156 [2024-05-16 09:49:01.508014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.508066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.508081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.508088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.508094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.508108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.518034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.518092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.518106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.518118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.518124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.518138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.528024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.528090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.528105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.528112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.528118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.528133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 
00:36:08.156 [2024-05-16 09:49:01.538011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.538073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.538087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.538094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.538101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.538115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.547988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.548032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.548046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.548057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.548063] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.548077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.558187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.558243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.558257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.558265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.558271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.558286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 
00:36:08.156 [2024-05-16 09:49:01.568197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.568248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.568263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.568270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.568276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.156 [2024-05-16 09:49:01.568290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.156 qpair failed and we were unable to recover it. 00:36:08.156 [2024-05-16 09:49:01.578115] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.156 [2024-05-16 09:49:01.578173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.156 [2024-05-16 09:49:01.578187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.156 [2024-05-16 09:49:01.578194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.156 [2024-05-16 09:49:01.578200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.578213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.588107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.588175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.588190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.588197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.588203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.588217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 
00:36:08.157 [2024-05-16 09:49:01.598175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.598223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.598237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.598244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.598250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.598264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.608327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.608378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.608392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.608403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.608409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.608423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.618337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.618391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.618405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.618413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.618419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.618432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 
00:36:08.157 [2024-05-16 09:49:01.628226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.628281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.628296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.628303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.628309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.628323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.638405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.638461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.638475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.638483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.638489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.638502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.648427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.648478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.648492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.648499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.648506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.648519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 
00:36:08.157 [2024-05-16 09:49:01.658425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.658482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.658496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.658504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.658510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.658523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.668433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.668478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.668492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.668499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.668505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.157 [2024-05-16 09:49:01.668518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.157 qpair failed and we were unable to recover it. 00:36:08.157 [2024-05-16 09:49:01.678505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.157 [2024-05-16 09:49:01.678560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.157 [2024-05-16 09:49:01.678574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.157 [2024-05-16 09:49:01.678582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.157 [2024-05-16 09:49:01.678588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.158 [2024-05-16 09:49:01.678601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.158 qpair failed and we were unable to recover it. 
00:36:08.158 [2024-05-16 09:49:01.688547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.158 [2024-05-16 09:49:01.688604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.158 [2024-05-16 09:49:01.688618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.158 [2024-05-16 09:49:01.688624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.158 [2024-05-16 09:49:01.688630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.158 [2024-05-16 09:49:01.688644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.158 qpair failed and we were unable to recover it. 00:36:08.158 [2024-05-16 09:49:01.698563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.158 [2024-05-16 09:49:01.698619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.158 [2024-05-16 09:49:01.698636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.158 [2024-05-16 09:49:01.698643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.158 [2024-05-16 09:49:01.698650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.158 [2024-05-16 09:49:01.698663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.158 qpair failed and we were unable to recover it. 00:36:08.158 [2024-05-16 09:49:01.708560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.158 [2024-05-16 09:49:01.708603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.158 [2024-05-16 09:49:01.708617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.158 [2024-05-16 09:49:01.708624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.158 [2024-05-16 09:49:01.708630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.158 [2024-05-16 09:49:01.708644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.158 qpair failed and we were unable to recover it. 
00:36:08.421 [2024-05-16 09:49:01.718635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.718687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.718701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.718708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.718714] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.718727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 00:36:08.421 [2024-05-16 09:49:01.728656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.728705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.728719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.728726] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.728732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.728746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 00:36:08.421 [2024-05-16 09:49:01.738679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.738738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.738752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.738759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.738765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.738782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 
00:36:08.421 [2024-05-16 09:49:01.748669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.748756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.748770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.748778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.748784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.748797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 00:36:08.421 [2024-05-16 09:49:01.758726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.758781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.758805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.758814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.758821] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.758840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 00:36:08.421 [2024-05-16 09:49:01.768766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.768824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.768839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.768846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.768853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.768867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 
00:36:08.421 [2024-05-16 09:49:01.778845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.778910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.778935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.778943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.778950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.778969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 00:36:08.421 [2024-05-16 09:49:01.788778] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.788824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.788844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.788851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.788857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.788872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 00:36:08.421 [2024-05-16 09:49:01.798811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.421 [2024-05-16 09:49:01.798858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.421 [2024-05-16 09:49:01.798873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.421 [2024-05-16 09:49:01.798880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.421 [2024-05-16 09:49:01.798887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.421 [2024-05-16 09:49:01.798901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.421 qpair failed and we were unable to recover it. 
00:36:08.422 [2024-05-16 09:49:01.808875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.808924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.808938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.808946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.808952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.808965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 00:36:08.422 [2024-05-16 09:49:01.818900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.819003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.819018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.819025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.819031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.819045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 00:36:08.422 [2024-05-16 09:49:01.828843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.828890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.828904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.828912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.828932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.828946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 
00:36:08.422 [2024-05-16 09:49:01.838944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.838998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.839012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.839019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.839026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.839039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 00:36:08.422 [2024-05-16 09:49:01.848976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.849029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.849043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.849050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.849062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.849076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 00:36:08.422 [2024-05-16 09:49:01.859003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.859061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.859075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.859082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.859088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.859102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 
00:36:08.422 [2024-05-16 09:49:01.868937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.868987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.869001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.869008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.869014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.869028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 00:36:08.422 [2024-05-16 09:49:01.879055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.879112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.879127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.879134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.879140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.879154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 00:36:08.422 [2024-05-16 09:49:01.889083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.889158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.889172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.889179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.889185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.889199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.422 qpair failed and we were unable to recover it. 
00:36:08.422 [2024-05-16 09:49:01.899099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.422 [2024-05-16 09:49:01.899158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.422 [2024-05-16 09:49:01.899172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.422 [2024-05-16 09:49:01.899179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.422 [2024-05-16 09:49:01.899185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.422 [2024-05-16 09:49:01.899199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 00:36:08.423 [2024-05-16 09:49:01.909055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.909152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.909167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.909174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.909180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.909194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 00:36:08.423 [2024-05-16 09:49:01.919151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.919204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.919218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.919232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.919238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.919252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 
00:36:08.423 [2024-05-16 09:49:01.929081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.929135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.929149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.929156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.929163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.929177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 00:36:08.423 [2024-05-16 09:49:01.939202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.939261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.939275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.939282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.939289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.939303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 00:36:08.423 [2024-05-16 09:49:01.949188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.949238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.949252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.949259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.949265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.949279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 
00:36:08.423 [2024-05-16 09:49:01.959154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.959203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.959217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.959225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.959231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.959245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 00:36:08.423 [2024-05-16 09:49:01.969323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.423 [2024-05-16 09:49:01.969376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.423 [2024-05-16 09:49:01.969391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.423 [2024-05-16 09:49:01.969398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.423 [2024-05-16 09:49:01.969404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.423 [2024-05-16 09:49:01.969417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.423 qpair failed and we were unable to recover it. 00:36:08.686 [2024-05-16 09:49:01.979212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.686 [2024-05-16 09:49:01.979265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.686 [2024-05-16 09:49:01.979279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.686 [2024-05-16 09:49:01.979286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.686 [2024-05-16 09:49:01.979293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.686 [2024-05-16 09:49:01.979307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.686 qpair failed and we were unable to recover it. 
00:36:08.686 [2024-05-16 09:49:01.989194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.686 [2024-05-16 09:49:01.989241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.686 [2024-05-16 09:49:01.989255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.686 [2024-05-16 09:49:01.989263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.686 [2024-05-16 09:49:01.989269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.686 [2024-05-16 09:49:01.989282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.686 qpair failed and we were unable to recover it. 00:36:08.686 [2024-05-16 09:49:01.999363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.686 [2024-05-16 09:49:01.999416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.686 [2024-05-16 09:49:01.999434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.686 [2024-05-16 09:49:01.999441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.686 [2024-05-16 09:49:01.999447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.686 [2024-05-16 09:49:01.999461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.686 qpair failed and we were unable to recover it. 00:36:08.686 [2024-05-16 09:49:02.009407] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.686 [2024-05-16 09:49:02.009464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.686 [2024-05-16 09:49:02.009478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.686 [2024-05-16 09:49:02.009489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.686 [2024-05-16 09:49:02.009495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.686 [2024-05-16 09:49:02.009508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.686 qpair failed and we were unable to recover it. 
00:36:08.686 [2024-05-16 09:49:02.019330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.019429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.019444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.019451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.019458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.019472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.029422] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.029474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.029489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.029496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.029502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.029515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.039438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.039490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.039504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.039512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.039518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.039532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 
00:36:08.687 [2024-05-16 09:49:02.049490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.049546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.049560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.049567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.049573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.049587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.059549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.059607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.059621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.059628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.059635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.059649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.069532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.069585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.069600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.069607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.069613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.069626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 
00:36:08.687 [2024-05-16 09:49:02.079606] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.079655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.079669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.079676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.079683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.079696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.089617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.089667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.089681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.089689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.089695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.089709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.099651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.099705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.099722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.099729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.099735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.099749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 
00:36:08.687 [2024-05-16 09:49:02.109653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.109701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.109716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.109723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.109729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.109743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.119742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.119802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.119816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.119823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.119830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.119844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.129744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.129797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.129811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.129818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.129824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.129838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 
00:36:08.687 [2024-05-16 09:49:02.139759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.139819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.139834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.139842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.139849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.687 [2024-05-16 09:49:02.139868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.687 qpair failed and we were unable to recover it. 00:36:08.687 [2024-05-16 09:49:02.149764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.687 [2024-05-16 09:49:02.149816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.687 [2024-05-16 09:49:02.149833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.687 [2024-05-16 09:49:02.149840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.687 [2024-05-16 09:49:02.149847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.149862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.688 [2024-05-16 09:49:02.159821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.159876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.159901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.159910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.159916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.159936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 
00:36:08.688 [2024-05-16 09:49:02.169866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.169920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.169937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.169944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.169951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.169966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.688 [2024-05-16 09:49:02.179875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.179929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.179943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.179951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.179957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.179972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.688 [2024-05-16 09:49:02.189872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.189917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.189936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.189944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.189950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.189964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 
00:36:08.688 [2024-05-16 09:49:02.199926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.199975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.199990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.199997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.200003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.200017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.688 [2024-05-16 09:49:02.209973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.210032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.210046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.210058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.210065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.210079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.688 [2024-05-16 09:49:02.219965] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.220022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.220036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.220043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.220049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.220068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 
00:36:08.688 [2024-05-16 09:49:02.229981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.230030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.230045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.230057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.230067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.230082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.688 [2024-05-16 09:49:02.240030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.688 [2024-05-16 09:49:02.240084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.688 [2024-05-16 09:49:02.240098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.688 [2024-05-16 09:49:02.240105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.688 [2024-05-16 09:49:02.240112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.688 [2024-05-16 09:49:02.240126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.688 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.250076] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.250127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.250141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.250148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.250155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.250169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 
00:36:08.952 [2024-05-16 09:49:02.260130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.260204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.260219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.260226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.260232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.260246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.270091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.270137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.270152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.270159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.270165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.270179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.280149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.280207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.280222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.280229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.280235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.280249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 
00:36:08.952 [2024-05-16 09:49:02.290200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.290289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.290303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.290310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.290317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.290330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.300212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.300276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.300290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.300297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.300304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.300317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.310204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.310256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.310270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.310278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.310284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.310298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 
00:36:08.952 [2024-05-16 09:49:02.320150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.320217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.320231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.320238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.320247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.320261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.330313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.330366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.330380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.330387] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.330394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.330407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.340305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.340360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.340373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.340380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.340386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.340400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 
00:36:08.952 [2024-05-16 09:49:02.350406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.350491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.350505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.350512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.350518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.350532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.360340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.360389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.360403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.360410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.360417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.360430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 00:36:08.952 [2024-05-16 09:49:02.370390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.952 [2024-05-16 09:49:02.370441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.952 [2024-05-16 09:49:02.370456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.952 [2024-05-16 09:49:02.370463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.952 [2024-05-16 09:49:02.370469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.952 [2024-05-16 09:49:02.370482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.952 qpair failed and we were unable to recover it. 
00:36:08.952 [2024-05-16 09:49:02.380425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.380481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.380496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.380503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.380509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.380523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.390403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.390454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.390469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.390476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.390482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.390497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.400485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.400536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.400550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.400558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.400564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.400578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 
00:36:08.953 [2024-05-16 09:49:02.410506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.410564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.410578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.410589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.410595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.410609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.420510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.420587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.420601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.420608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.420614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.420628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.430494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.430541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.430555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.430563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.430569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.430582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 
00:36:08.953 [2024-05-16 09:49:02.440580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.440633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.440648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.440655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.440661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.440675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.450610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.450664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.450678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.450685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.450691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.450705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.460635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.460689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.460704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.460711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.460717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.460731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 
00:36:08.953 [2024-05-16 09:49:02.470501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.470555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.470569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.470577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.470583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.470597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.480675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.480741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.480755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.480762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.480768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.480781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:08.953 [2024-05-16 09:49:02.490708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.490769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.490784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.490791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.490797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.490810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 
00:36:08.953 [2024-05-16 09:49:02.500742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.953 [2024-05-16 09:49:02.500798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.953 [2024-05-16 09:49:02.500816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.953 [2024-05-16 09:49:02.500823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.953 [2024-05-16 09:49:02.500829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:08.953 [2024-05-16 09:49:02.500842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.953 qpair failed and we were unable to recover it. 00:36:09.216 [2024-05-16 09:49:02.510692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.216 [2024-05-16 09:49:02.510741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.216 [2024-05-16 09:49:02.510756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.216 [2024-05-16 09:49:02.510763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.216 [2024-05-16 09:49:02.510769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.216 [2024-05-16 09:49:02.510783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.216 qpair failed and we were unable to recover it. 00:36:09.216 [2024-05-16 09:49:02.520762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.216 [2024-05-16 09:49:02.520815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.216 [2024-05-16 09:49:02.520829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.216 [2024-05-16 09:49:02.520836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.216 [2024-05-16 09:49:02.520842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.216 [2024-05-16 09:49:02.520856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.216 qpair failed and we were unable to recover it. 
00:36:09.216 [2024-05-16 09:49:02.530840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.216 [2024-05-16 09:49:02.530891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.216 [2024-05-16 09:49:02.530905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.216 [2024-05-16 09:49:02.530912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.216 [2024-05-16 09:49:02.530918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.216 [2024-05-16 09:49:02.530932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.216 qpair failed and we were unable to recover it. 00:36:09.216 [2024-05-16 09:49:02.540820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.216 [2024-05-16 09:49:02.540908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.216 [2024-05-16 09:49:02.540922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.216 [2024-05-16 09:49:02.540931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.216 [2024-05-16 09:49:02.540938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.216 [2024-05-16 09:49:02.540956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.216 qpair failed and we were unable to recover it. 00:36:09.216 [2024-05-16 09:49:02.550883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.216 [2024-05-16 09:49:02.550928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.216 [2024-05-16 09:49:02.550943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.216 [2024-05-16 09:49:02.550950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.216 [2024-05-16 09:49:02.550956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.216 [2024-05-16 09:49:02.550970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.216 qpair failed and we were unable to recover it. 
00:36:09.216 [2024-05-16 09:49:02.560895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.216 [2024-05-16 09:49:02.560949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.560963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.560970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.560976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.560990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.570917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.570968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.570983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.570990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.570996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.571010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.580847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.580939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.580955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.580962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.580969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.580984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 
00:36:09.217 [2024-05-16 09:49:02.590923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.590968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.590986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.590993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.590999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.591013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.601020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.601074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.601088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.601095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.601102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.601116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.611057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.611114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.611128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.611135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.611142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.611155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 
00:36:09.217 [2024-05-16 09:49:02.621078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.621134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.621148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.621155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.621161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.621175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.631050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.631106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.631121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.631128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.631134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.631155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.641116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.641167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.641181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.641189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.641195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.641209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 
00:36:09.217 [2024-05-16 09:49:02.651164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.651219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.651234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.651241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.651247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.651261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.661170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.661225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.661239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.661246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.661252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.661266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.671165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.671209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.671222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.671229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.671236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.671250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 
00:36:09.217 [2024-05-16 09:49:02.681233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.681289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.681303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.681310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.681316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.681330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.691248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.217 [2024-05-16 09:49:02.691301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.217 [2024-05-16 09:49:02.691315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.217 [2024-05-16 09:49:02.691322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.217 [2024-05-16 09:49:02.691329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.217 [2024-05-16 09:49:02.691343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.217 qpair failed and we were unable to recover it. 00:36:09.217 [2024-05-16 09:49:02.701267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.701319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.701333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.701340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.701346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.701360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 
00:36:09.218 [2024-05-16 09:49:02.711262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.711308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.711322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.711330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.711336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.711349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 00:36:09.218 [2024-05-16 09:49:02.721325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.721381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.721395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.721403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.721412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.721426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 00:36:09.218 [2024-05-16 09:49:02.731388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.731437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.731452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.731459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.731466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.731480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 
00:36:09.218 [2024-05-16 09:49:02.741414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.741478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.741492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.741499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.741506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.741521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 00:36:09.218 [2024-05-16 09:49:02.751270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.751315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.751329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.751336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.751342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.751356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 00:36:09.218 [2024-05-16 09:49:02.761332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.761392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.761406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.761413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.761419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.761433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 
00:36:09.218 [2024-05-16 09:49:02.771489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.218 [2024-05-16 09:49:02.771574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.218 [2024-05-16 09:49:02.771588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.218 [2024-05-16 09:49:02.771596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.218 [2024-05-16 09:49:02.771602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.218 [2024-05-16 09:49:02.771616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.218 qpair failed and we were unable to recover it. 00:36:09.480 [2024-05-16 09:49:02.781524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.480 [2024-05-16 09:49:02.781579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.480 [2024-05-16 09:49:02.781594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.480 [2024-05-16 09:49:02.781601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.781607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.781621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.791501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.791547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.791561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.791569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.791575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.791589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 
00:36:09.481 [2024-05-16 09:49:02.801570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.801626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.801640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.801647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.801653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.801666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.811604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.811653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.811667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.811677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.811684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.811697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.821633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.821691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.821705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.821712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.821719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.821732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 
00:36:09.481 [2024-05-16 09:49:02.831662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.831745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.831759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.831766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.831772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.831786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.841733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.841782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.841797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.841804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.841810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.841824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.851691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.851747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.851761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.851768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.851774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.851788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 
00:36:09.481 [2024-05-16 09:49:02.861744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.861797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.861811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.861818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.861824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.861837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.871731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.871789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.871804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.871811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.871819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.871832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.881810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.881860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.881875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.881882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.881889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.881903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 
00:36:09.481 [2024-05-16 09:49:02.891823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.891874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.891888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.891895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.891902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.891915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.901861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.901916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.901933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.901941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.901947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.901961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 00:36:09.481 [2024-05-16 09:49:02.911743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.481 [2024-05-16 09:49:02.911794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.481 [2024-05-16 09:49:02.911809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.481 [2024-05-16 09:49:02.911816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.481 [2024-05-16 09:49:02.911822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:09.481 [2024-05-16 09:49:02.911836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.481 qpair failed and we were unable to recover it. 
00:36:09.481 [2024-05-16 09:49:02.921876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.481 [2024-05-16 09:49:02.921940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.482 [2024-05-16 09:49:02.921954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.482 [2024-05-16 09:49:02.921962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.482 [2024-05-16 09:49:02.921968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90
00:36:09.482 [2024-05-16 09:49:02.921982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:09.482 qpair failed and we were unable to recover it.
[... the same seven-line failure sequence repeats 68 more times (69 attempts in total) at roughly 10 ms intervals, with SPDK timestamps 2024-05-16 09:49:02.931 through 09:49:03.603 and console timestamps 00:36:09.482 through 00:36:10.272; every attempt reports the identical "Unknown controller ID 0x1" / "sct 1, sc 130" / "CQ transport error -6" errors and ends with "qpair failed and we were unable to recover it." ...]
00:36:10.272 [2024-05-16 09:49:03.613799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.613888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.613903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.613914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.613921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.613938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.623776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.623828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.623842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.623849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.623855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.623869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.633817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.633866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.633880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.633887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.633894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.633907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 
00:36:10.272 [2024-05-16 09:49:03.643853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.643906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.643920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.643927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.643933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.643946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.653895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.653947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.653961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.653968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.653975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.653988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.663890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.663941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.663955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.663962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.663969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.663982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 
00:36:10.272 [2024-05-16 09:49:03.673909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.673955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.673969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.673976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.673982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.673996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.683945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.683993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.684007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.684015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.684021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.684034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.694020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.694093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.694108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.694115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.694121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.694136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 
00:36:10.272 [2024-05-16 09:49:03.703964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.704017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.704031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.704041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.704047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.272 [2024-05-16 09:49:03.704065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.272 qpair failed and we were unable to recover it. 00:36:10.272 [2024-05-16 09:49:03.714016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.272 [2024-05-16 09:49:03.714105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.272 [2024-05-16 09:49:03.714119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.272 [2024-05-16 09:49:03.714126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.272 [2024-05-16 09:49:03.714132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.714146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.724023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.724073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.724087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.724094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.724100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.724114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 
00:36:10.273 [2024-05-16 09:49:03.734113] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.734203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.734217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.734226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.734232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.734246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.744098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.744149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.744163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.744170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.744177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.744190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.754126] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.754229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.754243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.754250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.754257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.754271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 
00:36:10.273 [2024-05-16 09:49:03.764146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.764193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.764207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.764214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.764220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.764234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.774223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.774275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.774289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.774296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.774303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.774316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.784215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.784266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.784280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.784287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.784293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.784307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 
00:36:10.273 [2024-05-16 09:49:03.794248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.794297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.794314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.794321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.794328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.794342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.804270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.804320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.804335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.804342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.804348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.804362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.273 [2024-05-16 09:49:03.814317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.814372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.814386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.814393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.814400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.814413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 
00:36:10.273 [2024-05-16 09:49:03.824308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.273 [2024-05-16 09:49:03.824401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.273 [2024-05-16 09:49:03.824416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.273 [2024-05-16 09:49:03.824423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.273 [2024-05-16 09:49:03.824432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.273 [2024-05-16 09:49:03.824446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.273 qpair failed and we were unable to recover it. 00:36:10.535 [2024-05-16 09:49:03.834338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.535 [2024-05-16 09:49:03.834387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.535 [2024-05-16 09:49:03.834403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.535 [2024-05-16 09:49:03.834411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.535 [2024-05-16 09:49:03.834418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.535 [2024-05-16 09:49:03.834435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.535 qpair failed and we were unable to recover it. 00:36:10.535 [2024-05-16 09:49:03.844345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.535 [2024-05-16 09:49:03.844389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.535 [2024-05-16 09:49:03.844402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.535 [2024-05-16 09:49:03.844409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.535 [2024-05-16 09:49:03.844416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.535 [2024-05-16 09:49:03.844429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.535 qpair failed and we were unable to recover it. 
00:36:10.535 [2024-05-16 09:49:03.854433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.535 [2024-05-16 09:49:03.854488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.535 [2024-05-16 09:49:03.854502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.535 [2024-05-16 09:49:03.854509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.535 [2024-05-16 09:49:03.854516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.535 [2024-05-16 09:49:03.854530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.535 qpair failed and we were unable to recover it. 00:36:10.535 [2024-05-16 09:49:03.864414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.535 [2024-05-16 09:49:03.864466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.535 [2024-05-16 09:49:03.864481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.535 [2024-05-16 09:49:03.864488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.535 [2024-05-16 09:49:03.864494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.535 [2024-05-16 09:49:03.864507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.535 qpair failed and we were unable to recover it. 00:36:10.535 [2024-05-16 09:49:03.874487] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.535 [2024-05-16 09:49:03.874542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.535 [2024-05-16 09:49:03.874556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.535 [2024-05-16 09:49:03.874563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.535 [2024-05-16 09:49:03.874569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.535 [2024-05-16 09:49:03.874583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.535 qpair failed and we were unable to recover it. 
00:36:10.535 [2024-05-16 09:49:03.884502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.535 [2024-05-16 09:49:03.884553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.535 [2024-05-16 09:49:03.884570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.535 [2024-05-16 09:49:03.884577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.535 [2024-05-16 09:49:03.884583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.535 [2024-05-16 09:49:03.884597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.894583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.894635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.894649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.894656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.894662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.894676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.904535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.904592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.904606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.904613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.904619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.904632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 
00:36:10.536 [2024-05-16 09:49:03.914555] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.914598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.914611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.914618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.914625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.914638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.924450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.924496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.924510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.924518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.924528] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.924541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.934632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.934685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.934700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.934707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.934713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.934727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 
00:36:10.536 [2024-05-16 09:49:03.944591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.944658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.944673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.944680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.944686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.944701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.954528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.954576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.954590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.954597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.954603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.954618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.964558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.964608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.964623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.964630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.964636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.964649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 
00:36:10.536 [2024-05-16 09:49:03.974626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.974681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.974695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.974702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.974708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.974722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.984732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.984791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.984806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.984813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.984819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.984833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:03.994747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:03.994791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:03.994805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:03.994813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:03.994819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:03.994832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 
00:36:10.536 [2024-05-16 09:49:04.004795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.536 [2024-05-16 09:49:04.004855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.536 [2024-05-16 09:49:04.004869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.536 [2024-05-16 09:49:04.004877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.536 [2024-05-16 09:49:04.004883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.536 [2024-05-16 09:49:04.004896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.536 qpair failed and we were unable to recover it. 00:36:10.536 [2024-05-16 09:49:04.014741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.014802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.014817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.014825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.014835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.014851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 00:36:10.537 [2024-05-16 09:49:04.024869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.024964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.024980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.024987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.024993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.025007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 
00:36:10.537 [2024-05-16 09:49:04.034840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.034893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.034907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.034914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.034920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.034934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 00:36:10.537 [2024-05-16 09:49:04.044768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.044815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.044829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.044837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.044843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.044857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 00:36:10.537 [2024-05-16 09:49:04.054967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.055019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.055033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.055040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.055046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.055065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 
00:36:10.537 [2024-05-16 09:49:04.064956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.065015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.065030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.065037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.065043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.065064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 00:36:10.537 [2024-05-16 09:49:04.074984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.075031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.075045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.075058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.075064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.075078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 00:36:10.537 [2024-05-16 09:49:04.085007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.537 [2024-05-16 09:49:04.085062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.537 [2024-05-16 09:49:04.085078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.537 [2024-05-16 09:49:04.085085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.537 [2024-05-16 09:49:04.085091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.537 [2024-05-16 09:49:04.085105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.537 qpair failed and we were unable to recover it. 
00:36:10.800 [2024-05-16 09:49:04.095065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.800 [2024-05-16 09:49:04.095117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.800 [2024-05-16 09:49:04.095131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.800 [2024-05-16 09:49:04.095138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.800 [2024-05-16 09:49:04.095145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.800 [2024-05-16 09:49:04.095159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.800 qpair failed and we were unable to recover it. 00:36:10.800 [2024-05-16 09:49:04.105059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.800 [2024-05-16 09:49:04.105116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.800 [2024-05-16 09:49:04.105129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.800 [2024-05-16 09:49:04.105140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.800 [2024-05-16 09:49:04.105146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.800 [2024-05-16 09:49:04.105160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.800 qpair failed and we were unable to recover it. 00:36:10.800 [2024-05-16 09:49:04.115134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.800 [2024-05-16 09:49:04.115189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.800 [2024-05-16 09:49:04.115203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.800 [2024-05-16 09:49:04.115210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.800 [2024-05-16 09:49:04.115217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.800 [2024-05-16 09:49:04.115231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.800 qpair failed and we were unable to recover it. 
00:36:10.801 [2024-05-16 09:49:04.125117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.125171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.125185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.125192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.125199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.125212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.135176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.135229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.135243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.135250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.135256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.135270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.145132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.145189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.145204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.145211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.145217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.145231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 
00:36:10.801 [2024-05-16 09:49:04.155182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.155235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.155249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.155256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.155262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.155276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.165216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.165267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.165282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.165289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.165295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.165309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.175202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.175259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.175273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.175280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.175286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.175300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 
00:36:10.801 [2024-05-16 09:49:04.185271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.185323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.185337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.185344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.185351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.185365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.195347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.195393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.195410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.195417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.195423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.195437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.205314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.205367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.205381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.205388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.205394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.205408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 
00:36:10.801 [2024-05-16 09:49:04.215395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.215450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.215463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.215471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.215477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.215491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.225358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.225424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.225437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.225445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.225452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.225465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.235403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.235454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.235468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.235476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.235482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.235499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 
00:36:10.801 [2024-05-16 09:49:04.245443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.245487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.245501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.801 [2024-05-16 09:49:04.245508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.801 [2024-05-16 09:49:04.245514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.801 [2024-05-16 09:49:04.245527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.801 qpair failed and we were unable to recover it. 00:36:10.801 [2024-05-16 09:49:04.255492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.801 [2024-05-16 09:49:04.255584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.801 [2024-05-16 09:49:04.255598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.255605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.255611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.802 [2024-05-16 09:49:04.255625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.802 qpair failed and we were unable to recover it. 00:36:10.802 [2024-05-16 09:49:04.265500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.265559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.265573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.265580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.265587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.802 [2024-05-16 09:49:04.265600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.802 qpair failed and we were unable to recover it. 
00:36:10.802 [2024-05-16 09:49:04.275501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.275549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.275563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.275570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.275576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0840000b90 00:36:10.802 [2024-05-16 09:49:04.275590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:10.802 qpair failed and we were unable to recover it. 00:36:10.802 [2024-05-16 09:49:04.285514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.285570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.285600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.285609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.285616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c54270 00:36:10.802 [2024-05-16 09:49:04.285634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.802 qpair failed and we were unable to recover it. 00:36:10.802 [2024-05-16 09:49:04.295598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.295651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.295667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.295674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.295681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c54270 00:36:10.802 [2024-05-16 09:49:04.295695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.802 qpair failed and we were unable to recover it. 
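The same CONNECT failure signature repeats throughout this stretch of the run: the target-side _nvmf_ctrlr_add_io_qpair rejects controller ID 0x1, the initiator's fabric CONNECT poll then reports rc -5 with sct 1 / sc 130, and the qpair is dropped with CQ transport error -6. When triaging a log like this, a rough count of how often each signature occurs is usually more useful than reading every block. The sketch below is a hypothetical helper, not part of the SPDK test suite, and it assumes the exact message format shown above; the log path is likewise an assumption.

#!/usr/bin/env bash
# Hypothetical triage helper (an assumption, not part of this test run):
# summarize the repeated fabric CONNECT failures by status code and qpair id.
log="${1:-nvmf_target_disconnect.log}"   # path is an assumption

echo "CONNECT completions with error, grouped by (sct, sc):"
grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' "$log" |
    sort | uniq -c | sort -rn

echo "CQ transport errors, grouped by qpair id:"
grep -o 'on qpair id [0-9]*' "$log" |
    sort | uniq -c | sort -rn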
00:36:10.802 [2024-05-16 09:49:04.305585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.305637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.305652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.305659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.305665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c54270 00:36:10.802 [2024-05-16 09:49:04.305679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.802 qpair failed and we were unable to recover it. 00:36:10.802 [2024-05-16 09:49:04.306082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c61e30 is same with the state(5) to be set 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, 
sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 [2024-05-16 09:49:04.307002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.802 [2024-05-16 09:49:04.315620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.315726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.315789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.315813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.315832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0830000b90 00:36:10.802 [2024-05-16 09:49:04.315885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.802 qpair failed and we were unable to recover it. 00:36:10.802 [2024-05-16 09:49:04.325619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.802 [2024-05-16 09:49:04.325703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.802 [2024-05-16 09:49:04.325750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.802 [2024-05-16 09:49:04.325768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.802 [2024-05-16 09:49:04.325783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0830000b90 00:36:10.802 [2024-05-16 09:49:04.325821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:10.802 qpair failed and we were unable to recover it. 
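Two status signatures dominate the output around here: the fabric CONNECT completions carry sct 1 / sc 130, while the in-flight reads and writes complete with sct=0 / sc=8 once the qpair is torn down. The mapping below reflects my reading of the NVMe base and NVMe-oF status tables; it is a hedged aid, not something stated anywhere in this log.

# Hedged helper: name the (sct, sc) pairs seen above. The string mappings are
# an assumption based on the NVMe base and NVMe-oF specs, not on this log.
decode_status() {
    local sct="$1" sc="$2"
    case "${sct}:${sc}" in
        0:8)   echo "generic status 0x08 - command aborted (submission queue deleted)" ;;
        1:130) echo "fabrics CONNECT status 0x82 - invalid parameters" ;;
        *)     echo "sct=${sct} sc=${sc} - not mapped here" ;;
    esac
}
decode_status 1 130    # signature of the CONNECT rejections above
decode_status 0 8      # signature of the aborted reads/writes above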
00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Read completed with error (sct=0, sc=8) 00:36:10.802 starting I/O failed 00:36:10.802 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Read completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Read completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Read completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Read completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 Write completed with error (sct=0, sc=8) 00:36:10.803 starting I/O failed 00:36:10.803 [2024-05-16 09:49:04.326153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.803 [2024-05-16 09:49:04.335730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.803 [2024-05-16 09:49:04.335781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.803 [2024-05-16 09:49:04.335795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.803 [2024-05-16 09:49:04.335801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:36:10.803 [2024-05-16 09:49:04.335806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0838000b90 00:36:10.803 [2024-05-16 09:49:04.335817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.803 qpair failed and we were unable to recover it. 00:36:10.803 [2024-05-16 09:49:04.345706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.803 [2024-05-16 09:49:04.345755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.803 [2024-05-16 09:49:04.345767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.803 [2024-05-16 09:49:04.345772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.803 [2024-05-16 09:49:04.345777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0838000b90 00:36:10.803 [2024-05-16 09:49:04.345787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:10.803 qpair failed and we were unable to recover it. 00:36:10.803 [2024-05-16 09:49:04.346193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61e30 (9): Bad file descriptor 00:36:10.803 Initializing NVMe Controllers 00:36:10.803 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:10.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:10.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:10.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:10.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:10.803 Initialization complete. Launching workers. 
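The "Initializing NVMe Controllers" block above shows the test application attaching to the target at 10.0.0.2:4420 under subsystem nqn.2016-06.io.spdk:cnode1 and spreading the connection across lcores 0-3. For reference, the same listener can be reached by hand with the kernel initiator via nvme-cli; this is a hedged aside for reproducing the connection outside the harness, and it assumes the target from this run is still listening.

# Hedged aside: reach the same listener with nvme-cli (kernel initiator),
# assuming the target from this run is still up at 10.0.0.2:4420.
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
sudo nvme list-subsys                                   # confirm the fabrics controller appeared
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach again when done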
00:36:10.803 Starting thread on core 1 00:36:10.803 Starting thread on core 2 00:36:10.803 Starting thread on core 3 00:36:10.803 Starting thread on core 0 00:36:10.803 09:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:11.064 00:36:11.064 real 0m11.374s 00:36:11.064 user 0m21.368s 00:36:11.064 sys 0m3.668s 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.064 ************************************ 00:36:11.064 END TEST nvmf_target_disconnect_tc2 00:36:11.064 ************************************ 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:11.064 rmmod nvme_tcp 00:36:11.064 rmmod nvme_fabrics 00:36:11.064 rmmod nvme_keyring 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 539614 ']' 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 539614 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 539614 ']' 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 539614 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 539614 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 539614' 00:36:11.064 killing process with pid 539614 00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 539614 00:36:11.064 [2024-05-16 09:49:04.529529] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
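The teardown above follows a fixed pattern: flush outstanding I/O with sync, unload the kernel NVMe/TCP initiator modules, then kill the nvmf target process (pid 539614 here) once it has been confirmed to be an SPDK reactor rather than some unrelated process. A condensed sketch of that pattern follows; the pid is simply the one printed in this log, and the polling loop stands in for the harness's wait builtin so it also works when the target is not a child of the current shell.

# Condensed sketch of the teardown performed above (pid taken from this log).
nvmf_pid=539614
sync                                           # flush buffered I/O first
if kill -0 "$nvmf_pid" 2>/dev/null; then
    kill "$nvmf_pid"
    while kill -0 "$nvmf_pid" 2>/dev/null; do   # poll instead of 'wait',
        sleep 0.5                               # works for non-child pids too
    done
fi
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    sudo modprobe -v -r "$mod" || true          # ignore modules already unloaded
done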
00:36:11.064 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 539614 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:11.324 09:49:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.236 09:49:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:13.236 00:36:13.236 real 0m21.100s 00:36:13.236 user 0m48.863s 00:36:13.236 sys 0m9.298s 00:36:13.236 09:49:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:13.236 09:49:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:13.236 ************************************ 00:36:13.236 END TEST nvmf_target_disconnect 00:36:13.236 ************************************ 00:36:13.236 09:49:06 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:36:13.236 09:49:06 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.236 09:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:13.498 09:49:06 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:13.498 00:36:13.498 real 22m31.503s 00:36:13.498 user 47m39.925s 00:36:13.498 sys 6m52.599s 00:36:13.498 09:49:06 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:13.498 09:49:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:13.498 ************************************ 00:36:13.498 END TEST nvmf_tcp 00:36:13.498 ************************************ 00:36:13.498 09:49:06 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:36:13.498 09:49:06 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:13.498 09:49:06 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:13.498 09:49:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:13.498 09:49:06 -- common/autotest_common.sh@10 -- # set +x 00:36:13.498 ************************************ 00:36:13.498 START TEST spdkcli_nvmf_tcp 00:36:13.498 ************************************ 00:36:13.498 09:49:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:13.498 * Looking for test storage... 
00:36:13.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:13.498 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=541462 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 541462 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 541462 ']' 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:13.499 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:13.760 [2024-05-16 09:49:07.100160] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:36:13.760 [2024-05-16 09:49:07.100214] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541462 ] 00:36:13.760 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.760 [2024-05-16 09:49:07.158970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:13.760 [2024-05-16 09:49:07.224906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.760 [2024-05-16 09:49:07.224908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.331 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:14.331 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:36:14.331 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:14.331 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.331 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.591 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:14.591 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:14.592 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:14.592 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:14.592 09:49:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.592 09:49:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:14.592 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:14.592 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:14.592 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:14.592 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:14.592 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:14.592 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:14.592 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:14.592 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:14.592 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:14.592 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:14.592 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:14.592 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:14.592 ' 00:36:17.131 [2024-05-16 09:49:10.523339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.514 [2024-05-16 09:49:11.819276] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:18.514 [2024-05-16 09:49:11.819624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:21.057 [2024-05-16 09:49:14.222747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:22.969 [2024-05-16 09:49:16.305114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:24.353 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:24.353 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:24.353 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:24.353 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:24.353 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:24.353 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:24.353 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:24.353 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:24.353 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:24.353 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:24.353 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:24.353 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:24.353 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:24.613 09:49:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:24.613 09:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.613 09:49:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.613 09:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:24.613 09:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:24.613 09:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.613 09:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:24.613 09:49:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:36:24.874 09:49:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:24.874 09:49:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:24.874 09:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:24.874 09:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.874 09:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.135 09:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:25.135 09:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:25.135 09:49:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:25.135 09:49:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:25.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:25.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:25.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:25.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:25.135 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:25.135 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:25.135 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:25.135 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:25.135 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:25.135 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:25.135 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:25.135 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:25.135 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:25.135 ' 00:36:30.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:30.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:30.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:30.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:30.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:30.414 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:30.414 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:30.414 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:30.414 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:30.414 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:30.414 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:30.414 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:30.414 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:30.414 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 541462 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 541462 ']' 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 541462 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 541462 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 541462' 00:36:30.414 killing process with pid 541462 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 541462 00:36:30.414 [2024-05-16 09:49:23.948427] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:30.414 09:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 541462 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 541462 ']' 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 541462 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 541462 ']' 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 541462 00:36:30.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (541462) - No such process 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 541462 is not found' 00:36:30.675 Process with pid 541462 is not found 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:30.675 00:36:30.675 real 0m17.164s 00:36:30.675 user 0m37.520s 00:36:30.675 sys 0m0.894s 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:30.675 09:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
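The spdkcli_clear_nvmf_config step above is driven through spdk/test/spdkcli/spdkcli_job.py, which takes flat pairs of arguments: a spdkcli command path and a substring it expects to find in that command's output (the trailing False in each "Executing command" echo is the expect-failure flag left at its default). Below is a condensed sketch of the same teardown batch, kept to a few representative pairs; the command paths and object names are copied from the log, while the pairing convention is inferred from the echoes above rather than from the helper's source:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Each pair is "<spdkcli command>" "<expected output substring>".
  "$SPDK/test/spdkcli/spdkcli_job.py" \
    '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1' 'Malloc3' \
    '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262' '127.0.0.1:4262' \
    '/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3' 'nqn.2014-08.org.spdk:cnode3' \
    '/nvmf/subsystem delete_all' 'nqn.2014-08.org.spdk:cnode2' \
    '/bdevs/malloc delete Malloc1' 'Malloc1'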
00:36:30.675 ************************************ 00:36:30.675 END TEST spdkcli_nvmf_tcp 00:36:30.675 ************************************ 00:36:30.675 09:49:24 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:30.675 09:49:24 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:30.675 09:49:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:30.675 09:49:24 -- common/autotest_common.sh@10 -- # set +x 00:36:30.675 ************************************ 00:36:30.675 START TEST nvmf_identify_passthru 00:36:30.675 ************************************ 00:36:30.675 09:49:24 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:30.936 * Looking for test storage... 00:36:30.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:30.936 09:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.936 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.937 09:49:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.937 09:49:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.937 09:49:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.937 09:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.937 09:49:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.937 09:49:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.937 09:49:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:30.937 09:49:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.937 09:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.937 09:49:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.937 09:49:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:30.937 09:49:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:30.937 09:49:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:37.523 09:49:30 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:37.523 09:49:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:37.523 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:37.523 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:37.523 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:37.523 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
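The NIC discovery above (gather_supported_nvmf_pci_devs) matches the E810/X722/Mellanox device IDs against the PCI bus and then resolves each matching function to its kernel net device through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1. A minimal standalone sketch of that sysfs lookup; the PCI addresses and cvl_* names are specific to this test bed:

  # Map each PCI network function to its netdev name(s), mirroring
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh.
  for pci in 0000:4b:00.0 0000:4b:00.1; do      # addresses seen in this run
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] || continue                 # no netdev bound to this function
      printf 'Found net device under %s: %s\n' "$pci" "${dev##*/}"
    done
  done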
00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:37.523 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:37.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:37.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:36:37.785 00:36:37.785 --- 10.0.0.2 ping statistics --- 00:36:37.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.785 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:37.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:37.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:36:37.785 00:36:37.785 --- 10.0.0.1 ping statistics --- 00:36:37.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.785 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:37.785 09:49:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:37.785 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:37.785 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:37.785 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.047 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:36:38.047 09:49:31 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:36:38.047 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:38.047 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:38.047 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:38.047 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:38.047 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:38.047 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.620 
09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605480 00:36:38.620 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:38.620 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:38.620 09:49:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:38.620 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=548758 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:38.880 09:49:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 548758 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 548758 ']' 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:38.880 09:49:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.141 [2024-05-16 09:49:32.483913] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:36:39.141 [2024-05-16 09:49:32.483974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.141 EAL: No free 2048 kB hugepages reported on node 1 00:36:39.141 [2024-05-16 09:49:32.552769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:39.141 [2024-05-16 09:49:32.627691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.141 [2024-05-16 09:49:32.627731] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
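The nvme_identify step above records the serial and model number of the host's PCIe controller so they can be compared later against what the passthru NVMe-oF subsystem reports over TCP. The sketch below repeats the pattern from the log; traddr 0000:65:00.0 is the bdf that gen_nvme.sh reported on this host, and the awk field index relies on the 'Serial Number: ...' / 'Model Number: ...' layout that spdk_nvme_identify prints:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdf=0000:65:00.0   # first NVMe bdf reported by scripts/gen_nvme.sh in this run
  # Identify the local PCIe controller and keep the two fields the test compares later.
  nvme_serial_number=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Serial Number:' | awk '{print $3}')
  nvme_model_number=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep 'Model Number:' | awk '{print $3}')
  echo "local controller: serial=$nvme_serial_number model=$nvme_model_number"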
00:36:39.142 [2024-05-16 09:49:32.627738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.142 [2024-05-16 09:49:32.627745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.142 [2024-05-16 09:49:32.627750] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:39.142 [2024-05-16 09:49:32.627888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.142 [2024-05-16 09:49:32.628003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:39.142 [2024-05-16 09:49:32.628162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.142 [2024-05-16 09:49:32.628162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:39.710 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:39.710 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:36:39.710 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:39.710 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.710 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.710 INFO: Log level set to 20 00:36:39.710 INFO: Requests: 00:36:39.710 { 00:36:39.710 "jsonrpc": "2.0", 00:36:39.710 "method": "nvmf_set_config", 00:36:39.710 "id": 1, 00:36:39.710 "params": { 00:36:39.710 "admin_cmd_passthru": { 00:36:39.710 "identify_ctrlr": true 00:36:39.710 } 00:36:39.710 } 00:36:39.710 } 00:36:39.710 00:36:39.970 INFO: response: 00:36:39.970 { 00:36:39.970 "jsonrpc": "2.0", 00:36:39.970 "id": 1, 00:36:39.970 "result": true 00:36:39.970 } 00:36:39.970 00:36:39.970 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.970 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:39.970 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.970 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.971 INFO: Setting log level to 20 00:36:39.971 INFO: Setting log level to 20 00:36:39.971 INFO: Log level set to 20 00:36:39.971 INFO: Log level set to 20 00:36:39.971 INFO: Requests: 00:36:39.971 { 00:36:39.971 "jsonrpc": "2.0", 00:36:39.971 "method": "framework_start_init", 00:36:39.971 "id": 1 00:36:39.971 } 00:36:39.971 00:36:39.971 INFO: Requests: 00:36:39.971 { 00:36:39.971 "jsonrpc": "2.0", 00:36:39.971 "method": "framework_start_init", 00:36:39.971 "id": 1 00:36:39.971 } 00:36:39.971 00:36:39.971 [2024-05-16 09:49:33.346476] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:39.971 INFO: response: 00:36:39.971 { 00:36:39.971 "jsonrpc": "2.0", 00:36:39.971 "id": 1, 00:36:39.971 "result": true 00:36:39.971 } 00:36:39.971 00:36:39.971 INFO: response: 00:36:39.971 { 00:36:39.971 "jsonrpc": "2.0", 00:36:39.971 "id": 1, 00:36:39.971 "result": true 00:36:39.971 } 00:36:39.971 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.971 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.971 09:49:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.971 INFO: Setting log level to 40 00:36:39.971 INFO: Setting log level to 40 00:36:39.971 INFO: Setting log level to 40 00:36:39.971 [2024-05-16 09:49:33.359731] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:39.971 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:39.971 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:39.971 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 Nvme0n1 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.231 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.231 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.231 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 [2024-05-16 09:49:33.746126] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:40.231 [2024-05-16 09:49:33.746358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.231 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.231 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.231 [ 00:36:40.231 { 00:36:40.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:40.231 "subtype": "Discovery", 00:36:40.231 "listen_addresses": [], 00:36:40.231 "allow_any_host": true, 00:36:40.231 "hosts": [] 00:36:40.231 }, 00:36:40.231 { 00:36:40.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:40.231 "subtype": "NVMe", 00:36:40.231 "listen_addresses": [ 00:36:40.231 { 00:36:40.231 "trtype": "TCP", 
00:36:40.231 "adrfam": "IPv4", 00:36:40.231 "traddr": "10.0.0.2", 00:36:40.231 "trsvcid": "4420" 00:36:40.232 } 00:36:40.232 ], 00:36:40.232 "allow_any_host": true, 00:36:40.232 "hosts": [], 00:36:40.232 "serial_number": "SPDK00000000000001", 00:36:40.232 "model_number": "SPDK bdev Controller", 00:36:40.232 "max_namespaces": 1, 00:36:40.232 "min_cntlid": 1, 00:36:40.232 "max_cntlid": 65519, 00:36:40.232 "namespaces": [ 00:36:40.232 { 00:36:40.232 "nsid": 1, 00:36:40.232 "bdev_name": "Nvme0n1", 00:36:40.232 "name": "Nvme0n1", 00:36:40.232 "nguid": "3634473052605480002538450000003B", 00:36:40.232 "uuid": "36344730-5260-5480-0025-38450000003b" 00:36:40.232 } 00:36:40.232 ] 00:36:40.232 } 00:36:40.232 ] 00:36:40.232 09:49:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.232 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:40.232 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:40.232 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:40.492 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.492 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605480 00:36:40.492 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:40.492 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:40.492 09:49:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:40.492 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.752 09:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:40.752 09:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605480 '!=' S64GNE0R605480 ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.752 09:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:40.752 09:49:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:40.752 rmmod nvme_tcp 00:36:40.752 rmmod nvme_fabrics 00:36:40.752 rmmod 
nvme_keyring 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 548758 ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 548758 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 548758 ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 548758 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 548758 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 548758' 00:36:40.752 killing process with pid 548758 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 548758 00:36:40.752 [2024-05-16 09:49:34.208219] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:40.752 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 548758 00:36:41.012 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:41.012 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:41.012 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:41.012 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:41.012 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:41.012 09:49:34 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.012 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:41.012 09:49:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.555 09:49:36 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:43.555 00:36:43.555 real 0m12.382s 00:36:43.555 user 0m9.669s 00:36:43.555 sys 0m5.931s 00:36:43.555 09:49:36 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:43.555 09:49:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:43.555 ************************************ 00:36:43.555 END TEST nvmf_identify_passthru 00:36:43.555 ************************************ 00:36:43.555 09:49:36 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:43.555 09:49:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:43.555 09:49:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:43.555 09:49:36 -- common/autotest_common.sh@10 -- # set +x 00:36:43.555 ************************************ 00:36:43.555 START TEST nvmf_dif 00:36:43.555 
************************************ 00:36:43.555 09:49:36 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:43.555 * Looking for test storage... 00:36:43.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:43.556 09:49:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:43.556 09:49:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.556 09:49:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.556 09:49:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.556 09:49:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.556 09:49:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.556 09:49:36 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.556 09:49:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:43.556 09:49:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:43.556 09:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:43.556 09:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:43.556 09:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:43.556 09:49:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:43.556 09:49:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.556 09:49:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:43.556 09:49:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:43.556 09:49:36 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:43.556 09:49:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
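The NULL_META/NULL_BLOCK_SIZE/NULL_SIZE/NULL_DIF settings above describe the bdevs the dif test will export: 64 MiB null bdevs with 512-byte blocks, 16 bytes of metadata and DIF type 1, matched by the --dif-insert-or-strip transport option added later in this run. A hedged sketch of how one such bdev would be created over JSON-RPC once the target is up; the --md-size and --dif-type options of bdev_null_create are assumed to be available in this SPDK tree, since the exact rpc calls made by dif.sh are not visible in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Assumed invocation: one null bdev shaped like the NULL_* settings above
  # (64 MiB total, 512-byte blocks, 16-byte metadata, DIF type 1).
  "$SPDK/scripts/rpc.py" bdev_null_create bdev_null0 64 512 \
      --md-size 16 --dif-type 1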
00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:50.140 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:50.140 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:50.140 09:49:43 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.140 09:49:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:50.141 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:50.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:50.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:36:50.141 00:36:50.141 --- 10.0.0.2 ping statistics --- 00:36:50.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.141 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:50.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:50.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:36:50.141 00:36:50.141 --- 10.0.0.1 ping statistics --- 00:36:50.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.141 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:50.141 09:49:43 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:53.443 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:53.443 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:53.443 09:49:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:53.443 09:49:46 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=554639 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 554639 00:36:53.443 09:49:46 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 554639 ']' 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:53.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:53.443 09:49:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:53.443 [2024-05-16 09:49:46.955667] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:36:53.443 [2024-05-16 09:49:46.955753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.443 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.703 [2024-05-16 09:49:47.027262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.703 [2024-05-16 09:49:47.100205] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.703 [2024-05-16 09:49:47.100242] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.703 [2024-05-16 09:49:47.100250] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.703 [2024-05-16 09:49:47.100256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.703 [2024-05-16 09:49:47.100262] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
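In outline, the nvmf_tcp_init sequence traced above flushes both E810 ports, moves the target-side port into its own network namespace, addresses the two ends of the link, opens TCP port 4420, verifies connectivity in both directions, and then launches the SPDK target inside that namespace. The commands below are condensed from the trace itself; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are simply what this host assigned, the nvmf_tgt path is shortened, and error handling is omitted.

    TARGET_IF=cvl_0_0        # ends up as 10.0.0.2 inside the namespace
    INITIATOR_IF=cvl_0_1     # stays in the default namespace as 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator side -> target side
    ip netns exec "$NS" ping -c 1 10.0.0.1  # target side -> initiator side

    # The target application is then started inside the namespace (backgrounded
    # by the harness; full path abbreviated here):
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
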
00:36:53.703 [2024-05-16 09:49:47.100279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:36:54.274 09:49:47 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.274 09:49:47 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:54.274 09:49:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:54.274 09:49:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.274 [2024-05-16 09:49:47.759316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.274 09:49:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:54.274 09:49:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.274 ************************************ 00:36:54.274 START TEST fio_dif_1_default 00:36:54.274 ************************************ 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.274 bdev_null0 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.274 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.535 [2024-05-16 09:49:47.847495] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:54.535 [2024-05-16 09:49:47.847694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:54.535 { 00:36:54.535 "params": { 00:36:54.535 "name": "Nvme$subsystem", 00:36:54.535 "trtype": "$TEST_TRANSPORT", 00:36:54.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.535 "adrfam": "ipv4", 00:36:54.535 "trsvcid": "$NVMF_PORT", 00:36:54.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.535 "hdgst": ${hdgst:-false}, 00:36:54.535 "ddgst": ${ddgst:-false} 00:36:54.535 }, 00:36:54.535 "method": "bdev_nvme_attach_controller" 00:36:54.535 } 00:36:54.535 EOF 00:36:54.535 )") 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:36:54.535 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:54.536 "params": { 00:36:54.536 "name": "Nvme0", 00:36:54.536 "trtype": "tcp", 00:36:54.536 "traddr": "10.0.0.2", 00:36:54.536 "adrfam": "ipv4", 00:36:54.536 "trsvcid": "4420", 00:36:54.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.536 "hdgst": false, 00:36:54.536 "ddgst": false 00:36:54.536 }, 00:36:54.536 "method": "bdev_nvme_attach_controller" 00:36:54.536 }' 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:54.536 09:49:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.797 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:54.797 fio-3.35 00:36:54.797 Starting 1 thread 00:36:54.797 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.025 00:37:07.026 filename0: (groupid=0, jobs=1): err= 0: pid=555126: Thu May 16 09:49:58 2024 00:37:07.026 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:37:07.026 slat (nsec): min=5645, max=42379, avg=6609.57, stdev=1821.92 00:37:07.026 clat (usec): min=40820, max=43215, avg=41044.51, stdev=289.11 00:37:07.026 lat (usec): min=40826, max=43257, avg=41051.12, stdev=289.36 00:37:07.026 clat percentiles (usec): 00:37:07.026 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:07.026 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:07.026 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:07.026 | 99.00th=[42730], 99.50th=[42730], 
99.90th=[43254], 99.95th=[43254], 00:37:07.026 | 99.99th=[43254] 00:37:07.026 bw ( KiB/s): min= 384, max= 416, per=99.58%, avg=388.80, stdev=11.72, samples=20 00:37:07.026 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:07.026 lat (msec) : 50=100.00% 00:37:07.026 cpu : usr=95.43%, sys=4.37%, ctx=13, majf=0, minf=233 00:37:07.026 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:07.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.026 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.026 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:07.026 00:37:07.026 Run status group 0 (all jobs): 00:37:07.026 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10020-10020msec 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 00:37:07.026 real 0m11.094s 00:37:07.026 user 0m24.735s 00:37:07.026 sys 0m0.767s 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 ************************************ 00:37:07.026 END TEST fio_dif_1_default 00:37:07.026 ************************************ 00:37:07.026 09:49:58 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:07.026 09:49:58 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:07.026 09:49:58 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:07.026 09:49:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 ************************************ 00:37:07.026 START TEST fio_dif_1_multi_subsystems 00:37:07.026 ************************************ 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 bdev_null0 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 [2024-05-16 09:49:59.029888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 bdev_null1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:59 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:07.026 { 00:37:07.026 "params": { 00:37:07.026 "name": "Nvme$subsystem", 00:37:07.026 "trtype": "$TEST_TRANSPORT", 00:37:07.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.026 "adrfam": "ipv4", 00:37:07.026 "trsvcid": "$NVMF_PORT", 00:37:07.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.026 "hdgst": ${hdgst:-false}, 00:37:07.026 "ddgst": ${ddgst:-false} 00:37:07.026 }, 00:37:07.026 "method": "bdev_nvme_attach_controller" 00:37:07.026 } 00:37:07.026 EOF 00:37:07.026 )") 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:37:07.026 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:07.027 { 00:37:07.027 "params": { 00:37:07.027 "name": "Nvme$subsystem", 00:37:07.027 "trtype": "$TEST_TRANSPORT", 00:37:07.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.027 "adrfam": "ipv4", 00:37:07.027 "trsvcid": "$NVMF_PORT", 00:37:07.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.027 "hdgst": ${hdgst:-false}, 00:37:07.027 "ddgst": ${ddgst:-false} 00:37:07.027 }, 00:37:07.027 "method": "bdev_nvme_attach_controller" 00:37:07.027 } 00:37:07.027 EOF 00:37:07.027 )") 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
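The gen_nvmf_target_json / fio_bdev plumbing traced above (and in every other fio run in this log) hands fio two generated documents through /dev/fd: a JSON config on /dev/fd/62 telling the spdk_bdev ioengine which NVMe-oF controllers to attach, and the fio job file on /dev/fd/61, presumably via bash process substitution. The helper below is a stand-in, not the harness's exact code: it builds the same per-controller entries that are printed just below (Nvme0 and Nvme1, both against 10.0.0.2:4420) and wraps them in the usual SPDK "subsystems" JSON-config envelope with jq.

    build_attach_config() {
        # One bdev_nvme_attach_controller entry per subsystem id given on the
        # command line, then the standard SPDK JSON-config envelope around them.
        local sub
        for sub in "$@"; do
            jq -n --arg n "$sub" '{
                method: "bdev_nvme_attach_controller",
                params: {
                    name: ("Nvme" + $n), trtype: "tcp", traddr: "10.0.0.2",
                    adrfam: "ipv4", trsvcid: "4420",
                    subnqn: ("nqn.2016-06.io.spdk:cnode" + $n),
                    hostnqn: ("nqn.2016-06.io.spdk:host" + $n),
                    hdgst: false, ddgst: false
                }
            }'
        done | jq -s '{subsystems: [{subsystem: "bdev", config: .}]}'
    }

    # fio runs through the SPDK bdev ioengine plugin; job.fio stands in for the
    # job file the harness generates and passes on /dev/fd/61.
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        fio --ioengine=spdk_bdev \
            --spdk_json_conf <(build_attach_config 0 1) \
            job.fio
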
00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:07.027 "params": { 00:37:07.027 "name": "Nvme0", 00:37:07.027 "trtype": "tcp", 00:37:07.027 "traddr": "10.0.0.2", 00:37:07.027 "adrfam": "ipv4", 00:37:07.027 "trsvcid": "4420", 00:37:07.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.027 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:07.027 "hdgst": false, 00:37:07.027 "ddgst": false 00:37:07.027 }, 00:37:07.027 "method": "bdev_nvme_attach_controller" 00:37:07.027 },{ 00:37:07.027 "params": { 00:37:07.027 "name": "Nvme1", 00:37:07.027 "trtype": "tcp", 00:37:07.027 "traddr": "10.0.0.2", 00:37:07.027 "adrfam": "ipv4", 00:37:07.027 "trsvcid": "4420", 00:37:07.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:07.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:07.027 "hdgst": false, 00:37:07.027 "ddgst": false 00:37:07.027 }, 00:37:07.027 "method": "bdev_nvme_attach_controller" 00:37:07.027 }' 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:07.027 09:49:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.027 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:07.027 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:07.027 fio-3.35 00:37:07.027 Starting 2 threads 00:37:07.027 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.030 00:37:17.030 filename0: (groupid=0, jobs=1): err= 0: pid=557426: Thu May 16 09:50:10 2024 00:37:17.030 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:37:17.030 slat (nsec): min=5661, max=58883, avg=6686.39, stdev=2522.68 00:37:17.030 clat (usec): min=40973, max=43943, avg=41981.43, stdev=157.67 00:37:17.030 lat (usec): min=40978, max=44002, avg=41988.12, stdev=158.58 00:37:17.030 clat percentiles (usec): 00:37:17.030 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:37:17.030 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:37:17.030 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:17.030 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:37:17.030 | 99.99th=[43779] 
00:37:17.030 bw ( KiB/s): min= 352, max= 384, per=33.39%, avg=380.80, stdev= 9.85, samples=20 00:37:17.030 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:37:17.030 lat (msec) : 50=100.00% 00:37:17.030 cpu : usr=97.25%, sys=2.51%, ctx=17, majf=0, minf=206 00:37:17.030 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.030 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.030 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:17.030 filename1: (groupid=0, jobs=1): err= 0: pid=557427: Thu May 16 09:50:10 2024 00:37:17.030 read: IOPS=189, BW=758KiB/s (777kB/s)(7600KiB/10020msec) 00:37:17.030 slat (nsec): min=2979, max=14365, avg=5846.82, stdev=298.88 00:37:17.030 clat (usec): min=619, max=46822, avg=21078.59, stdev=20117.63 00:37:17.030 lat (usec): min=625, max=46836, avg=21084.43, stdev=20117.60 00:37:17.030 clat percentiles (usec): 00:37:17.030 | 1.00th=[ 717], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 930], 00:37:17.030 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:37:17.030 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:17.030 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:37:17.030 | 99.99th=[46924] 00:37:17.030 bw ( KiB/s): min= 704, max= 768, per=66.60%, avg=758.40, stdev=23.45, samples=20 00:37:17.030 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:37:17.030 lat (usec) : 750=1.37%, 1000=47.79% 00:37:17.030 lat (msec) : 2=0.74%, 50=50.11% 00:37:17.030 cpu : usr=97.31%, sys=2.50%, ctx=12, majf=0, minf=34 00:37:17.030 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.030 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.030 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:17.030 00:37:17.030 Run status group 0 (all jobs): 00:37:17.030 READ: bw=1138KiB/s (1165kB/s), 381KiB/s-758KiB/s (390kB/s-777kB/s), io=11.2MiB (11.7MB), run=10020-10038msec 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
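The teardown running here, and the setup that preceded each fio run in this log, are the same short RPC lifecycle against the target started earlier. rpc_cmd in the harness forwards to SPDK's scripts/rpc.py over the /var/tmp/spdk.sock socket mentioned above, so the sequence can be written out directly; it is shown for subsystem 0 with DIF type 1, and the other tests in this file only vary the id and the --dif-type value.

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

    # Done once, right after the target comes up: TCP transport with DIF
    # insert/strip enabled, as in the nvmf_create_transport call traced earlier.
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

    # create_subsystem 0: a 64 MB null bdev with 512-byte blocks, 16 bytes of
    # metadata and protection information type 1, exported as cnode0 on 4420.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420

    # destroy_subsystem 0: the inverse, which is what the trace is doing here.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_null_delete bdev_null0
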
00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 00:37:17.030 real 0m11.433s 00:37:17.030 user 0m32.281s 00:37:17.030 sys 0m0.875s 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 ************************************ 00:37:17.030 END TEST fio_dif_1_multi_subsystems 00:37:17.030 ************************************ 00:37:17.030 09:50:10 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:17.030 09:50:10 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:17.030 09:50:10 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 ************************************ 00:37:17.030 START TEST fio_dif_rand_params 00:37:17.030 ************************************ 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 bdev_null0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:17.030 [2024-05-16 09:50:10.546020] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.030 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:17.031 { 00:37:17.031 "params": { 00:37:17.031 "name": "Nvme$subsystem", 00:37:17.031 "trtype": "$TEST_TRANSPORT", 00:37:17.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:17.031 "adrfam": "ipv4", 00:37:17.031 "trsvcid": "$NVMF_PORT", 00:37:17.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:17.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:37:17.031 "hdgst": ${hdgst:-false}, 00:37:17.031 "ddgst": ${ddgst:-false} 00:37:17.031 }, 00:37:17.031 "method": "bdev_nvme_attach_controller" 00:37:17.031 } 00:37:17.031 EOF 00:37:17.031 )") 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:17.031 09:50:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:17.031 "params": { 00:37:17.031 "name": "Nvme0", 00:37:17.031 "trtype": "tcp", 00:37:17.031 "traddr": "10.0.0.2", 00:37:17.031 "adrfam": "ipv4", 00:37:17.031 "trsvcid": "4420", 00:37:17.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:17.031 "hdgst": false, 00:37:17.031 "ddgst": false 00:37:17.031 }, 00:37:17.031 "method": "bdev_nvme_attach_controller" 00:37:17.031 }' 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:17.312 09:50:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.580 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:17.580 ... 
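The job-file half of this run (carried on /dev/fd/61) is generated by gen_fio_conf from the NULL_DIF=3 parameters chosen above and is not echoed into the log; only fio's banner line above reflects it. The sketch below is therefore an assumed reconstruction of its contents, but the values themselves (randread, 128k blocks, queue depth 3, 3 jobs, 5 seconds) all appear in the trace, and Nvme0n1 follows SPDK's usual NvmeXnY naming for the namespace attached from cnode0.

    # Hypothetical reconstruction of the generated fio job file for this case.
    printf '%s\n' \
        '[global]' \
        'thread=1' \
        '[filename0]' \
        'filename=Nvme0n1' \
        'rw=randread' \
        'bs=128k' \
        'iodepth=3' \
        'numjobs=3' \
        'runtime=5' \
        > dif_rand_params.fio
    # Consumed exactly like the earlier runs: LD_PRELOAD of build/fio/spdk_bdev,
    # then fio --ioengine=spdk_bdev --spdk_json_conf <json> dif_rand_params.fio
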
00:37:17.580 fio-3.35 00:37:17.580 Starting 3 threads 00:37:17.580 EAL: No free 2048 kB hugepages reported on node 1 00:37:24.180 00:37:24.180 filename0: (groupid=0, jobs=1): err= 0: pid=559860: Thu May 16 09:50:16 2024 00:37:24.180 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(155MiB/5046msec) 00:37:24.180 slat (nsec): min=5692, max=49493, avg=6995.50, stdev=2134.62 00:37:24.180 clat (usec): min=6610, max=54210, avg=12154.62, stdev=4319.42 00:37:24.180 lat (usec): min=6616, max=54216, avg=12161.62, stdev=4319.35 00:37:24.180 clat percentiles (usec): 00:37:24.180 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10421], 00:37:24.180 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12256], 00:37:24.180 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13829], 95.00th=[14353], 00:37:24.180 | 99.00th=[47449], 99.50th=[49546], 99.90th=[51643], 99.95th=[54264], 00:37:24.180 | 99.99th=[54264] 00:37:24.180 bw ( KiB/s): min=26624, max=33792, per=34.00%, avg=31718.40, stdev=2049.60, samples=10 00:37:24.180 iops : min= 208, max= 264, avg=247.80, stdev=16.01, samples=10 00:37:24.180 lat (msec) : 10=15.39%, 20=83.48%, 50=0.81%, 100=0.32% 00:37:24.180 cpu : usr=95.12%, sys=4.64%, ctx=16, majf=0, minf=125 00:37:24.180 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.180 issued rwts: total=1241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.180 filename0: (groupid=0, jobs=1): err= 0: pid=559861: Thu May 16 09:50:16 2024 00:37:24.180 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(158MiB/5043msec) 00:37:24.180 slat (nsec): min=5723, max=33388, avg=7683.27, stdev=1830.43 00:37:24.180 clat (usec): min=5439, max=53640, avg=11897.62, stdev=4063.50 00:37:24.180 lat (usec): min=5445, max=53647, avg=11905.30, stdev=4063.54 00:37:24.180 clat percentiles (usec): 00:37:24.180 | 1.00th=[ 6849], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[10028], 00:37:24.180 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:37:24.180 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14091], 95.00th=[14746], 00:37:24.180 | 99.00th=[17433], 99.50th=[49546], 99.90th=[53216], 99.95th=[53740], 00:37:24.180 | 99.99th=[53740] 00:37:24.180 bw ( KiB/s): min=27392, max=37632, per=34.71%, avg=32384.00, stdev=2734.00, samples=10 00:37:24.180 iops : min= 214, max= 294, avg=253.00, stdev=21.36, samples=10 00:37:24.180 lat (msec) : 10=19.42%, 20=79.72%, 50=0.55%, 100=0.32% 00:37:24.180 cpu : usr=94.57%, sys=5.20%, ctx=8, majf=0, minf=114 00:37:24.180 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.180 issued rwts: total=1267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.180 filename0: (groupid=0, jobs=1): err= 0: pid=559862: Thu May 16 09:50:16 2024 00:37:24.180 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(146MiB/5046msec) 00:37:24.180 slat (nsec): min=5709, max=49742, avg=6911.19, stdev=2020.26 00:37:24.180 clat (usec): min=6514, max=53385, avg=12890.92, stdev=6318.52 00:37:24.180 lat (usec): min=6520, max=53394, avg=12897.83, stdev=6318.64 00:37:24.180 clat percentiles (usec): 00:37:24.180 | 
1.00th=[ 7373], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10552], 00:37:24.180 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:37:24.180 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14353], 95.00th=[15139], 00:37:24.180 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52167], 99.95th=[53216], 00:37:24.180 | 99.99th=[53216] 00:37:24.180 bw ( KiB/s): min=22528, max=34304, per=32.05%, avg=29906.00, stdev=3340.05, samples=10 00:37:24.180 iops : min= 176, max= 268, avg=233.60, stdev=26.14, samples=10 00:37:24.180 lat (msec) : 10=11.62%, 20=85.90%, 50=0.43%, 100=2.05% 00:37:24.180 cpu : usr=95.16%, sys=4.58%, ctx=14, majf=0, minf=126 00:37:24.180 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.180 issued rwts: total=1170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.180 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.180 00:37:24.180 Run status group 0 (all jobs): 00:37:24.180 READ: bw=91.1MiB/s (95.5MB/s), 29.0MiB/s-31.4MiB/s (30.4MB/s-32.9MB/s), io=460MiB (482MB), run=5043-5046msec 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:24.180 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:24.181 09:50:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 bdev_null0 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 [2024-05-16 09:50:16.857722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 bdev_null1 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
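This last parameter set (NULL_DIF=2, 4k blocks, 8 jobs, queue depth 16, files=2) repeats the per-subsystem RPC sequence sketched earlier once for each of the ids 0, 1 and 2, only with --dif-type 2, which is exactly what the trace is stepping through here. Roughly, using the same $RPC shorthand as before; the filename line at the end is a reconstruction, since the generated job file again is not echoed.

    for sub in 0 1 2; do
        $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
        # ...followed by the same nvmf_create_subsystem / nvmf_subsystem_add_ns /
        # nvmf_subsystem_add_listener calls as in the earlier sketch, with $sub
        # substituted into the cnode and serial numbers.
    done
    # Each of the 8 jobs in the generated job file then reads from several of the
    # resulting namespaces at once, e.g. filename=Nvme0n1:Nvme1n1:Nvme2n1
    # (':' is fio's multi-file separator).
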
00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 bdev_null2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:24.181 { 00:37:24.181 "params": { 00:37:24.181 "name": "Nvme$subsystem", 00:37:24.181 "trtype": "$TEST_TRANSPORT", 00:37:24.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.181 "adrfam": "ipv4", 00:37:24.181 "trsvcid": "$NVMF_PORT", 00:37:24.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.181 "hdgst": ${hdgst:-false}, 00:37:24.181 "ddgst": ${ddgst:-false} 00:37:24.181 }, 00:37:24.181 "method": "bdev_nvme_attach_controller" 00:37:24.181 } 00:37:24.181 EOF 00:37:24.181 )") 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:24.181 { 00:37:24.181 "params": { 00:37:24.181 "name": "Nvme$subsystem", 00:37:24.181 "trtype": "$TEST_TRANSPORT", 00:37:24.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.181 "adrfam": "ipv4", 00:37:24.181 "trsvcid": "$NVMF_PORT", 00:37:24.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.181 "hdgst": ${hdgst:-false}, 00:37:24.181 "ddgst": ${ddgst:-false} 00:37:24.181 }, 00:37:24.181 "method": "bdev_nvme_attach_controller" 00:37:24.181 } 00:37:24.181 EOF 00:37:24.181 )") 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:24.181 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:24.181 { 00:37:24.181 "params": { 00:37:24.181 "name": "Nvme$subsystem", 00:37:24.181 "trtype": "$TEST_TRANSPORT", 00:37:24.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.181 "adrfam": "ipv4", 00:37:24.181 "trsvcid": "$NVMF_PORT", 00:37:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:24.182 "hdgst": ${hdgst:-false}, 00:37:24.182 "ddgst": ${ddgst:-false} 00:37:24.182 }, 00:37:24.182 "method": "bdev_nvme_attach_controller" 00:37:24.182 } 00:37:24.182 EOF 00:37:24.182 )") 00:37:24.182 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:24.182 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:24.182 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:24.182 09:50:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:24.182 "params": { 00:37:24.182 "name": "Nvme0", 00:37:24.182 "trtype": "tcp", 00:37:24.182 "traddr": "10.0.0.2", 00:37:24.182 "adrfam": "ipv4", 00:37:24.182 "trsvcid": "4420", 00:37:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.182 "hdgst": false, 00:37:24.182 "ddgst": false 00:37:24.182 }, 00:37:24.182 "method": "bdev_nvme_attach_controller" 00:37:24.182 },{ 00:37:24.182 "params": { 00:37:24.182 "name": "Nvme1", 00:37:24.182 "trtype": "tcp", 00:37:24.182 "traddr": "10.0.0.2", 00:37:24.182 "adrfam": "ipv4", 00:37:24.182 "trsvcid": "4420", 00:37:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:24.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:24.182 "hdgst": false, 00:37:24.182 "ddgst": false 00:37:24.182 }, 00:37:24.182 "method": "bdev_nvme_attach_controller" 00:37:24.182 },{ 00:37:24.182 "params": { 00:37:24.182 "name": "Nvme2", 00:37:24.182 "trtype": "tcp", 00:37:24.182 "traddr": "10.0.0.2", 00:37:24.182 "adrfam": "ipv4", 00:37:24.182 "trsvcid": "4420", 00:37:24.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:24.182 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:24.182 "hdgst": false, 00:37:24.182 "ddgst": false 00:37:24.182 }, 00:37:24.182 "method": "bdev_nvme_attach_controller" 00:37:24.182 }' 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:24.182 09:50:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.182 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:24.182 ... 00:37:24.182 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:24.182 ... 00:37:24.182 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:24.182 ... 00:37:24.182 fio-3.35 00:37:24.182 Starting 24 threads 00:37:24.182 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.410 00:37:36.410 filename0: (groupid=0, jobs=1): err= 0: pid=561242: Thu May 16 09:50:28 2024 00:37:36.410 read: IOPS=535, BW=2142KiB/s (2193kB/s)(20.9MiB/10013msec) 00:37:36.410 slat (usec): min=5, max=115, avg=16.46, stdev=13.99 00:37:36.410 clat (usec): min=2653, max=58579, avg=29733.20, stdev=5564.47 00:37:36.410 lat (usec): min=2668, max=58587, avg=29749.66, stdev=5567.03 00:37:36.410 clat percentiles (usec): 00:37:36.410 | 1.00th=[ 3458], 5.00th=[18220], 10.00th=[22152], 20.00th=[30016], 00:37:36.410 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.410 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:37:36.410 | 99.00th=[33817], 99.50th=[33817], 99.90th=[44303], 99.95th=[48497], 00:37:36.410 | 99.99th=[58459] 00:37:36.410 bw ( KiB/s): min= 1920, max= 2880, per=4.47%, avg=2138.40, stdev=295.95, samples=20 00:37:36.410 iops : min= 480, max= 720, avg=534.60, stdev=73.99, samples=20 00:37:36.410 lat (msec) : 4=1.36%, 10=0.43%, 20=6.01%, 50=92.17%, 100=0.04% 00:37:36.410 cpu : usr=99.30%, sys=0.37%, ctx=20, majf=0, minf=9 00:37:36.410 IO depths : 1=4.8%, 2=9.9%, 4=21.5%, 8=56.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:37:36.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 issued rwts: total=5362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.410 filename0: (groupid=0, jobs=1): err= 0: pid=561243: Thu May 16 09:50:28 2024 00:37:36.410 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.8MiB/10021msec) 00:37:36.410 slat (nsec): min=5680, max=87334, avg=14096.18, stdev=10491.18 00:37:36.410 clat (usec): min=10829, max=54228, avg=31520.54, stdev=3527.68 00:37:36.410 lat (usec): min=10838, max=54246, avg=31534.64, stdev=3529.04 00:37:36.410 clat percentiles (usec): 00:37:36.410 | 1.00th=[14353], 5.00th=[24511], 10.00th=[31065], 20.00th=[31851], 00:37:36.410 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.410 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.410 | 99.00th=[38011], 99.50th=[40109], 99.90th=[54264], 99.95th=[54264], 00:37:36.410 | 99.99th=[54264] 00:37:36.410 bw ( KiB/s): min= 1920, max= 2320, per=4.22%, avg=2021.20, stdev=101.07, samples=20 00:37:36.410 iops : min= 480, max= 580, avg=505.30, stdev=25.27, samples=20 00:37:36.410 lat (msec) : 20=2.43%, 
50=97.30%, 100=0.28% 00:37:36.410 cpu : usr=98.90%, sys=0.79%, ctx=14, majf=0, minf=9 00:37:36.410 IO depths : 1=4.9%, 2=10.1%, 4=22.2%, 8=55.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:36.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 issued rwts: total=5069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.410 filename0: (groupid=0, jobs=1): err= 0: pid=561244: Thu May 16 09:50:28 2024 00:37:36.410 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10019msec) 00:37:36.410 slat (usec): min=5, max=116, avg=18.70, stdev=13.99 00:37:36.410 clat (usec): min=13806, max=39969, avg=32013.55, stdev=1629.71 00:37:36.410 lat (usec): min=13813, max=39976, avg=32032.24, stdev=1629.39 00:37:36.410 clat percentiles (usec): 00:37:36.410 | 1.00th=[25035], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:37:36.410 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.410 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.410 | 99.00th=[33817], 99.50th=[34866], 99.90th=[39584], 99.95th=[40109], 00:37:36.410 | 99.99th=[40109] 00:37:36.410 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1986.40, stdev=64.06, samples=20 00:37:36.410 iops : min= 480, max= 512, avg=496.60, stdev=16.01, samples=20 00:37:36.410 lat (msec) : 20=0.44%, 50=99.56% 00:37:36.410 cpu : usr=99.24%, sys=0.45%, ctx=15, majf=0, minf=9 00:37:36.410 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:36.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.410 filename0: (groupid=0, jobs=1): err= 0: pid=561245: Thu May 16 09:50:28 2024 00:37:36.410 read: IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10008msec) 00:37:36.410 slat (usec): min=6, max=399, avg=31.90, stdev=20.43 00:37:36.410 clat (usec): min=29360, max=52903, avg=32066.08, stdev=1297.93 00:37:36.410 lat (usec): min=29371, max=52935, avg=32097.99, stdev=1298.18 00:37:36.410 clat percentiles (usec): 00:37:36.410 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:37:36.410 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.410 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:37:36.410 | 99.00th=[33424], 99.50th=[33817], 99.90th=[52691], 99.95th=[52691], 00:37:36.410 | 99.99th=[52691] 00:37:36.410 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1973.68, stdev=65.12, samples=19 00:37:36.410 iops : min= 479, max= 512, avg=493.42, stdev=16.28, samples=19 00:37:36.410 lat (msec) : 50=99.68%, 100=0.32% 00:37:36.410 cpu : usr=98.99%, sys=0.65%, ctx=54, majf=0, minf=9 00:37:36.410 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:36.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.410 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.410 filename0: (groupid=0, jobs=1): err= 0: pid=561246: Thu May 16 09:50:28 2024 00:37:36.410 read: IOPS=497, BW=1989KiB/s 
(2037kB/s)(19.4MiB/10006msec) 00:37:36.410 slat (usec): min=5, max=121, avg=16.41, stdev=15.98 00:37:36.410 clat (usec): min=6050, max=56642, avg=32107.50, stdev=3152.93 00:37:36.410 lat (usec): min=6059, max=56659, avg=32123.90, stdev=3152.91 00:37:36.410 clat percentiles (usec): 00:37:36.410 | 1.00th=[20579], 5.00th=[28967], 10.00th=[31589], 20.00th=[31851], 00:37:36.410 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:37:36.410 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:37:36.410 | 99.00th=[41681], 99.50th=[47449], 99.90th=[56361], 99.95th=[56886], 00:37:36.411 | 99.99th=[56886] 00:37:36.411 bw ( KiB/s): min= 1843, max= 2016, per=4.14%, avg=1983.32, stdev=40.20, samples=19 00:37:36.411 iops : min= 460, max= 504, avg=495.79, stdev=10.20, samples=19 00:37:36.411 lat (msec) : 10=0.08%, 20=0.90%, 50=98.69%, 100=0.32% 00:37:36.411 cpu : usr=99.15%, sys=0.53%, ctx=17, majf=0, minf=9 00:37:36.411 IO depths : 1=0.1%, 2=0.1%, 4=1.1%, 8=80.6%, 16=18.0%, 32=0.0%, >=64=0.0% 00:37:36.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 complete : 0=0.0%, 4=89.5%, 8=10.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.411 filename0: (groupid=0, jobs=1): err= 0: pid=561248: Thu May 16 09:50:28 2024 00:37:36.411 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10024msec) 00:37:36.411 slat (usec): min=5, max=392, avg=15.97, stdev=11.86 00:37:36.411 clat (usec): min=12319, max=76952, avg=32239.89, stdev=2364.36 00:37:36.411 lat (usec): min=12331, max=76969, avg=32255.86, stdev=2364.05 00:37:36.411 clat percentiles (usec): 00:37:36.411 | 1.00th=[26084], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:37:36.411 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.411 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:37:36.411 | 99.00th=[38011], 99.50th=[44303], 99.90th=[77071], 99.95th=[77071], 00:37:36.411 | 99.99th=[77071] 00:37:36.411 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1977.60, stdev=65.33, samples=20 00:37:36.411 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:37:36.411 lat (msec) : 20=0.44%, 50=99.23%, 100=0.32% 00:37:36.411 cpu : usr=98.81%, sys=0.88%, ctx=16, majf=0, minf=9 00:37:36.411 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:36.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 issued rwts: total=4951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.411 filename0: (groupid=0, jobs=1): err= 0: pid=561249: Thu May 16 09:50:28 2024 00:37:36.411 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10009msec) 00:37:36.411 slat (usec): min=5, max=399, avg=21.27, stdev=16.75 00:37:36.411 clat (usec): min=17015, max=71487, avg=32249.09, stdev=2797.54 00:37:36.411 lat (usec): min=17025, max=71503, avg=32270.36, stdev=2797.31 00:37:36.411 clat percentiles (usec): 00:37:36.411 | 1.00th=[25560], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:37:36.411 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.411 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.411 | 99.00th=[40109], 99.50th=[44303], 99.90th=[71828], 
99.95th=[71828], 00:37:36.411 | 99.99th=[71828] 00:37:36.411 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:37:36.411 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:37:36.411 lat (msec) : 20=0.53%, 50=99.03%, 100=0.45% 00:37:36.411 cpu : usr=98.86%, sys=0.83%, ctx=28, majf=0, minf=9 00:37:36.411 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:36.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.411 filename0: (groupid=0, jobs=1): err= 0: pid=561250: Thu May 16 09:50:28 2024 00:37:36.411 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10003msec) 00:37:36.411 slat (usec): min=6, max=400, avg=30.48, stdev=18.30 00:37:36.411 clat (usec): min=14042, max=58825, avg=32089.18, stdev=1828.71 00:37:36.411 lat (usec): min=14072, max=58843, avg=32119.66, stdev=1828.32 00:37:36.411 clat percentiles (usec): 00:37:36.411 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:37:36.411 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.411 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:37:36.411 | 99.00th=[33817], 99.50th=[42730], 99.90th=[50594], 99.95th=[54789], 00:37:36.411 | 99.99th=[58983] 00:37:36.411 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1974.05, stdev=76.01, samples=19 00:37:36.411 iops : min= 448, max= 512, avg=493.47, stdev=19.10, samples=19 00:37:36.411 lat (msec) : 20=0.32%, 50=99.31%, 100=0.36% 00:37:36.411 cpu : usr=98.84%, sys=0.78%, ctx=34, majf=0, minf=9 00:37:36.411 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:36.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.411 filename1: (groupid=0, jobs=1): err= 0: pid=561251: Thu May 16 09:50:28 2024 00:37:36.411 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10021msec) 00:37:36.411 slat (usec): min=5, max=114, avg=15.68, stdev=12.37 00:37:36.411 clat (usec): min=12987, max=56707, avg=31865.12, stdev=3166.40 00:37:36.411 lat (usec): min=12996, max=56736, avg=31880.80, stdev=3167.36 00:37:36.411 clat percentiles (usec): 00:37:36.411 | 1.00th=[17171], 5.00th=[26346], 10.00th=[31327], 20.00th=[31851], 00:37:36.411 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.411 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:37:36.411 | 99.00th=[41157], 99.50th=[43254], 99.90th=[51643], 99.95th=[51643], 00:37:36.411 | 99.99th=[56886] 00:37:36.411 bw ( KiB/s): min= 1920, max= 2128, per=4.18%, avg=1999.20, stdev=69.35, samples=20 00:37:36.411 iops : min= 480, max= 532, avg=499.80, stdev=17.34, samples=20 00:37:36.411 lat (msec) : 20=1.38%, 50=98.42%, 100=0.20% 00:37:36.411 cpu : usr=99.14%, sys=0.54%, ctx=15, majf=0, minf=9 00:37:36.411 IO depths : 1=1.1%, 2=6.3%, 4=22.2%, 8=59.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:37:36.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.411 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:36.411 issued rwts: total=5012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.411 filename1: (groupid=0, jobs=1): err= 0: pid=561252: Thu May 16 09:50:28 2024 00:37:36.411 read: IOPS=498, BW=1994KiB/s (2041kB/s)(19.5MiB/10002msec) 00:37:36.411 slat (nsec): min=5668, max=79764, avg=14246.74, stdev=10280.46 00:37:36.411 clat (usec): min=16044, max=57643, avg=31986.10, stdev=2597.19 00:37:36.411 lat (usec): min=16051, max=57663, avg=32000.34, stdev=2597.72 00:37:36.411 clat percentiles (usec): 00:37:36.411 | 1.00th=[19530], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:37:36.411 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.411 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.411 | 99.00th=[33817], 99.50th=[43254], 99.90th=[57410], 99.95th=[57410], 00:37:36.411 | 99.99th=[57410] 00:37:36.412 bw ( KiB/s): min= 1792, max= 2264, per=4.16%, avg=1991.16, stdev=100.59, samples=19 00:37:36.412 iops : min= 448, max= 566, avg=497.79, stdev=25.15, samples=19 00:37:36.412 lat (msec) : 20=1.30%, 50=98.38%, 100=0.32% 00:37:36.412 cpu : usr=98.84%, sys=0.85%, ctx=16, majf=0, minf=9 00:37:36.412 IO depths : 1=5.7%, 2=11.7%, 4=24.2%, 8=51.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:36.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 issued rwts: total=4985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.412 filename1: (groupid=0, jobs=1): err= 0: pid=561254: Thu May 16 09:50:28 2024 00:37:36.412 read: IOPS=504, BW=2020KiB/s (2068kB/s)(19.8MiB/10014msec) 00:37:36.412 slat (usec): min=5, max=122, avg=17.28, stdev=13.05 00:37:36.412 clat (usec): min=2687, max=34100, avg=31537.94, stdev=4045.46 00:37:36.412 lat (usec): min=2705, max=34109, avg=31555.22, stdev=4044.86 00:37:36.412 clat percentiles (usec): 00:37:36.412 | 1.00th=[ 3359], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:37:36.412 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.412 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.412 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[34341], 00:37:36.412 | 99.99th=[34341] 00:37:36.412 bw ( KiB/s): min= 1920, max= 2688, per=4.21%, avg=2016.00, stdev=170.60, samples=20 00:37:36.412 iops : min= 480, max= 672, avg=504.00, stdev=42.65, samples=20 00:37:36.412 lat (msec) : 4=1.58%, 10=0.32%, 20=0.32%, 50=97.78% 00:37:36.412 cpu : usr=99.18%, sys=0.48%, ctx=18, majf=0, minf=9 00:37:36.412 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:36.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.412 filename1: (groupid=0, jobs=1): err= 0: pid=561255: Thu May 16 09:50:28 2024 00:37:36.412 read: IOPS=500, BW=2003KiB/s (2051kB/s)(19.6MiB/10020msec) 00:37:36.412 slat (nsec): min=5675, max=76457, avg=13639.48, stdev=10007.58 00:37:36.412 clat (usec): min=10462, max=56870, avg=31845.80, stdev=2986.64 00:37:36.412 lat (usec): min=10470, max=56879, avg=31859.44, stdev=2986.96 00:37:36.412 clat percentiles (usec): 00:37:36.412 
| 1.00th=[16057], 5.00th=[28967], 10.00th=[31589], 20.00th=[31851], 00:37:36.412 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:37:36.412 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.412 | 99.00th=[36439], 99.50th=[49021], 99.90th=[53216], 99.95th=[56361], 00:37:36.412 | 99.99th=[56886] 00:37:36.412 bw ( KiB/s): min= 1920, max= 2328, per=4.18%, avg=2000.40, stdev=101.47, samples=20 00:37:36.412 iops : min= 480, max= 582, avg=500.10, stdev=25.37, samples=20 00:37:36.412 lat (msec) : 20=1.69%, 50=97.87%, 100=0.44% 00:37:36.412 cpu : usr=99.17%, sys=0.53%, ctx=15, majf=0, minf=9 00:37:36.412 IO depths : 1=4.9%, 2=10.7%, 4=23.9%, 8=52.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:36.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 issued rwts: total=5017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.412 filename1: (groupid=0, jobs=1): err= 0: pid=561256: Thu May 16 09:50:28 2024 00:37:36.412 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10003msec) 00:37:36.412 slat (usec): min=5, max=104, avg=25.23, stdev=15.64 00:37:36.412 clat (usec): min=11199, max=70326, avg=32034.16, stdev=3307.62 00:37:36.412 lat (usec): min=11208, max=70345, avg=32059.39, stdev=3308.49 00:37:36.412 clat percentiles (usec): 00:37:36.412 | 1.00th=[19530], 5.00th=[28967], 10.00th=[31327], 20.00th=[31589], 00:37:36.412 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.412 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:37:36.412 | 99.00th=[46400], 99.50th=[51119], 99.90th=[55837], 99.95th=[55837], 00:37:36.412 | 99.99th=[70779] 00:37:36.412 bw ( KiB/s): min= 1843, max= 2096, per=4.14%, avg=1981.63, stdev=72.27, samples=19 00:37:36.412 iops : min= 460, max= 524, avg=495.37, stdev=18.15, samples=19 00:37:36.412 lat (msec) : 20=1.31%, 50=98.13%, 100=0.56% 00:37:36.412 cpu : usr=98.94%, sys=0.75%, ctx=14, majf=0, minf=9 00:37:36.412 IO depths : 1=4.9%, 2=9.9%, 4=20.8%, 8=56.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:36.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.412 filename1: (groupid=0, jobs=1): err= 0: pid=561257: Thu May 16 09:50:28 2024 00:37:36.412 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10004msec) 00:37:36.412 slat (usec): min=5, max=405, avg=27.47, stdev=17.49 00:37:36.412 clat (usec): min=14151, max=60405, avg=31970.95, stdev=3403.14 00:37:36.412 lat (usec): min=14158, max=60423, avg=31998.42, stdev=3404.54 00:37:36.412 clat percentiles (usec): 00:37:36.412 | 1.00th=[18482], 5.00th=[29230], 10.00th=[31327], 20.00th=[31589], 00:37:36.412 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.412 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:37:36.412 | 99.00th=[47449], 99.50th=[55837], 99.90th=[60556], 99.95th=[60556], 00:37:36.412 | 99.99th=[60556] 00:37:36.412 bw ( KiB/s): min= 1712, max= 2144, per=4.14%, avg=1984.84, stdev=97.39, samples=19 00:37:36.412 iops : min= 428, max= 536, avg=496.21, stdev=24.35, samples=19 00:37:36.412 lat (msec) : 20=1.09%, 50=98.35%, 100=0.56% 00:37:36.412 cpu : 
usr=99.00%, sys=0.69%, ctx=14, majf=0, minf=9 00:37:36.412 IO depths : 1=5.3%, 2=10.9%, 4=23.3%, 8=53.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:36.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.412 issued rwts: total=4970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.412 filename1: (groupid=0, jobs=1): err= 0: pid=561258: Thu May 16 09:50:28 2024 00:37:36.412 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:37:36.412 slat (usec): min=5, max=414, avg=29.66, stdev=16.69 00:37:36.412 clat (usec): min=29372, max=54135, avg=32131.65, stdev=1366.34 00:37:36.412 lat (usec): min=29385, max=54151, avg=32161.30, stdev=1365.20 00:37:36.412 clat percentiles (usec): 00:37:36.412 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:37:36.412 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.412 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.412 | 99.00th=[33424], 99.50th=[33817], 99.90th=[54264], 99.95th=[54264], 00:37:36.412 | 99.99th=[54264] 00:37:36.413 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1973.89, stdev=64.93, samples=19 00:37:36.413 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:37:36.413 lat (msec) : 50=99.68%, 100=0.32% 00:37:36.413 cpu : usr=99.15%, sys=0.55%, ctx=16, majf=0, minf=9 00:37:36.413 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:36.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.413 filename1: (groupid=0, jobs=1): err= 0: pid=561259: Thu May 16 09:50:28 2024 00:37:36.413 read: IOPS=505, BW=2021KiB/s (2070kB/s)(19.8MiB/10005msec) 00:37:36.413 slat (usec): min=5, max=133, avg=16.39, stdev=17.08 00:37:36.413 clat (usec): min=5317, max=56380, avg=31517.92, stdev=4064.98 00:37:36.413 lat (usec): min=5324, max=56398, avg=31534.31, stdev=4066.25 00:37:36.413 clat percentiles (usec): 00:37:36.413 | 1.00th=[18744], 5.00th=[23200], 10.00th=[28967], 20.00th=[31589], 00:37:36.413 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.413 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:37:36.413 | 99.00th=[44827], 99.50th=[47449], 99.90th=[56361], 99.95th=[56361], 00:37:36.413 | 99.99th=[56361] 00:37:36.413 bw ( KiB/s): min= 1795, max= 2368, per=4.21%, avg=2014.47, stdev=124.54, samples=19 00:37:36.413 iops : min= 448, max= 592, avg=503.58, stdev=31.21, samples=19 00:37:36.413 lat (msec) : 10=0.18%, 20=2.85%, 50=96.66%, 100=0.32% 00:37:36.413 cpu : usr=99.12%, sys=0.58%, ctx=14, majf=0, minf=9 00:37:36.413 IO depths : 1=4.2%, 2=9.1%, 4=20.6%, 8=57.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:37:36.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.413 filename2: (groupid=0, jobs=1): err= 0: pid=561260: Thu May 16 09:50:28 2024 00:37:36.413 read: IOPS=494, BW=1978KiB/s 
(2025kB/s)(19.3MiB/10012msec) 00:37:36.413 slat (usec): min=5, max=398, avg=28.51, stdev=17.79 00:37:36.413 clat (usec): min=13405, max=60580, avg=32125.61, stdev=1805.08 00:37:36.413 lat (usec): min=13429, max=60604, avg=32154.12, stdev=1804.84 00:37:36.413 clat percentiles (usec): 00:37:36.413 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:37:36.413 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.413 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.413 | 99.00th=[33817], 99.50th=[39060], 99.90th=[57934], 99.95th=[57934], 00:37:36.413 | 99.99th=[60556] 00:37:36.413 bw ( KiB/s): min= 1840, max= 2048, per=4.11%, avg=1969.68, stdev=70.93, samples=19 00:37:36.413 iops : min= 460, max= 512, avg=492.42, stdev=17.73, samples=19 00:37:36.413 lat (msec) : 20=0.32%, 50=99.47%, 100=0.20% 00:37:36.413 cpu : usr=99.00%, sys=0.69%, ctx=15, majf=0, minf=9 00:37:36.413 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:36.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.413 filename2: (groupid=0, jobs=1): err= 0: pid=561261: Thu May 16 09:50:28 2024 00:37:36.413 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10021msec) 00:37:36.413 slat (nsec): min=5717, max=98431, avg=20805.00, stdev=14674.52 00:37:36.413 clat (usec): min=17013, max=37039, avg=32027.91, stdev=1252.54 00:37:36.413 lat (usec): min=17023, max=37075, avg=32048.72, stdev=1252.75 00:37:36.413 clat percentiles (usec): 00:37:36.413 | 1.00th=[27657], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:37:36.413 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.413 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32637], 95.00th=[32900], 00:37:36.413 | 99.00th=[33817], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:37:36.413 | 99.99th=[36963] 00:37:36.413 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1984.00, stdev=64.21, samples=20 00:37:36.413 iops : min= 480, max= 512, avg=496.00, stdev=16.05, samples=20 00:37:36.413 lat (msec) : 20=0.32%, 50=99.68% 00:37:36.413 cpu : usr=99.20%, sys=0.46%, ctx=19, majf=0, minf=9 00:37:36.413 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:36.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.413 filename2: (groupid=0, jobs=1): err= 0: pid=561263: Thu May 16 09:50:28 2024 00:37:36.413 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10001msec) 00:37:36.413 slat (nsec): min=5834, max=88510, avg=22394.13, stdev=13884.00 00:37:36.413 clat (usec): min=14223, max=57290, avg=32160.34, stdev=1883.21 00:37:36.413 lat (usec): min=14235, max=57308, avg=32182.74, stdev=1882.51 00:37:36.413 clat percentiles (usec): 00:37:36.413 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:37:36.413 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.413 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:37:36.413 | 99.00th=[33424], 99.50th=[40633], 
99.90th=[57410], 99.95th=[57410], 00:37:36.413 | 99.99th=[57410] 00:37:36.413 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.89, stdev=77.69, samples=19 00:37:36.413 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:37:36.413 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:37:36.413 cpu : usr=99.15%, sys=0.55%, ctx=14, majf=0, minf=9 00:37:36.413 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:36.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.413 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.413 filename2: (groupid=0, jobs=1): err= 0: pid=561264: Thu May 16 09:50:28 2024 00:37:36.413 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10013msec) 00:37:36.413 slat (usec): min=5, max=401, avg=15.48, stdev=13.02 00:37:36.413 clat (usec): min=28979, max=54370, avg=32277.37, stdev=1378.10 00:37:36.413 lat (usec): min=29017, max=54389, avg=32292.85, stdev=1376.85 00:37:36.413 clat percentiles (usec): 00:37:36.413 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:37:36.413 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:37:36.413 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.414 | 99.00th=[33817], 99.50th=[34341], 99.90th=[54264], 99.95th=[54264], 00:37:36.414 | 99.99th=[54264] 00:37:36.414 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1971.35, stdev=76.21, samples=20 00:37:36.414 iops : min= 448, max= 512, avg=492.80, stdev=19.14, samples=20 00:37:36.414 lat (msec) : 50=99.68%, 100=0.32% 00:37:36.414 cpu : usr=99.13%, sys=0.57%, ctx=14, majf=0, minf=9 00:37:36.414 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:36.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.414 filename2: (groupid=0, jobs=1): err= 0: pid=561265: Thu May 16 09:50:28 2024 00:37:36.414 read: IOPS=507, BW=2029KiB/s (2078kB/s)(19.9MiB/10021msec) 00:37:36.414 slat (nsec): min=5669, max=92664, avg=16055.13, stdev=11812.39 00:37:36.414 clat (usec): min=11113, max=46508, avg=31402.92, stdev=3600.39 00:37:36.414 lat (usec): min=11120, max=46514, avg=31418.98, stdev=3602.14 00:37:36.414 clat percentiles (usec): 00:37:36.414 | 1.00th=[12780], 5.00th=[25035], 10.00th=[29230], 20.00th=[31589], 00:37:36.414 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.414 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:37:36.414 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[46400], 00:37:36.414 | 99.99th=[46400] 00:37:36.414 bw ( KiB/s): min= 1920, max= 2320, per=4.23%, avg=2027.20, stdev=112.46, samples=20 00:37:36.414 iops : min= 480, max= 580, avg=506.80, stdev=28.12, samples=20 00:37:36.414 lat (msec) : 20=2.56%, 50=97.44% 00:37:36.414 cpu : usr=99.02%, sys=0.66%, ctx=14, majf=0, minf=9 00:37:36.414 IO depths : 1=5.0%, 2=10.2%, 4=22.0%, 8=55.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:36.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:36.414 issued rwts: total=5084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.414 filename2: (groupid=0, jobs=1): err= 0: pid=561266: Thu May 16 09:50:28 2024 00:37:36.414 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10009msec) 00:37:36.414 slat (usec): min=5, max=107, avg=30.90, stdev=20.72 00:37:36.414 clat (usec): min=15483, max=76955, avg=32099.45, stdev=1995.06 00:37:36.414 lat (usec): min=15492, max=76972, avg=32130.36, stdev=1995.04 00:37:36.414 clat percentiles (usec): 00:37:36.414 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:37:36.414 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.414 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:37:36.414 | 99.00th=[33424], 99.50th=[33817], 99.90th=[77071], 99.95th=[77071], 00:37:36.414 | 99.99th=[77071] 00:37:36.414 bw ( KiB/s): min= 1880, max= 2048, per=4.12%, avg=1973.89, stdev=66.28, samples=19 00:37:36.414 iops : min= 470, max= 512, avg=493.47, stdev=16.57, samples=19 00:37:36.414 lat (msec) : 20=0.14%, 50=99.53%, 100=0.32% 00:37:36.414 cpu : usr=99.28%, sys=0.39%, ctx=54, majf=0, minf=9 00:37:36.414 IO depths : 1=5.7%, 2=11.6%, 4=24.1%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:36.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.414 filename2: (groupid=0, jobs=1): err= 0: pid=561267: Thu May 16 09:50:28 2024 00:37:36.414 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:37:36.414 slat (usec): min=6, max=132, avg=30.23, stdev=18.61 00:37:36.414 clat (usec): min=5867, max=55632, avg=31965.21, stdev=2298.00 00:37:36.414 lat (usec): min=5875, max=55654, avg=31995.44, stdev=2299.08 00:37:36.414 clat percentiles (usec): 00:37:36.414 | 1.00th=[30802], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:37:36.414 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:37:36.414 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:37:36.414 | 99.00th=[33424], 99.50th=[33817], 99.90th=[55837], 99.95th=[55837], 00:37:36.414 | 99.99th=[55837] 00:37:36.414 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.89, stdev=77.69, samples=19 00:37:36.414 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:37:36.414 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:37:36.414 cpu : usr=99.07%, sys=0.60%, ctx=39, majf=0, minf=9 00:37:36.414 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:36.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.414 filename2: (groupid=0, jobs=1): err= 0: pid=561268: Thu May 16 09:50:28 2024 00:37:36.414 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10005msec) 00:37:36.414 slat (nsec): min=5663, max=92364, avg=13888.33, stdev=11313.17 00:37:36.414 clat (usec): min=7840, max=70626, avg=32009.98, stdev=4838.69 00:37:36.414 lat (usec): min=7847, max=70644, avg=32023.87, stdev=4838.81 00:37:36.414 clat percentiles (usec): 00:37:36.414 | 
1.00th=[16581], 5.00th=[23725], 10.00th=[26870], 20.00th=[31589], 00:37:36.414 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:37:36.414 | 70.00th=[32375], 80.00th=[32900], 90.00th=[35914], 95.00th=[40109], 00:37:36.414 | 99.00th=[49546], 99.50th=[52691], 99.90th=[57410], 99.95th=[70779], 00:37:36.414 | 99.99th=[70779] 00:37:36.414 bw ( KiB/s): min= 1792, max= 2160, per=4.15%, avg=1988.21, stdev=88.54, samples=19 00:37:36.414 iops : min= 448, max= 540, avg=497.05, stdev=22.13, samples=19 00:37:36.414 lat (msec) : 10=0.12%, 20=2.07%, 50=97.05%, 100=0.76% 00:37:36.414 cpu : usr=99.12%, sys=0.57%, ctx=13, majf=0, minf=9 00:37:36.414 IO depths : 1=1.9%, 2=4.0%, 4=10.6%, 8=70.6%, 16=13.0%, 32=0.0%, >=64=0.0% 00:37:36.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 complete : 0=0.0%, 4=90.8%, 8=5.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.414 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:36.414 00:37:36.414 Run status group 0 (all jobs): 00:37:36.414 READ: bw=46.7MiB/s (49.0MB/s), 1974KiB/s-2142KiB/s (2022kB/s-2193kB/s), io=469MiB (491MB), run=10001-10024msec 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.414 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 bdev_null0 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
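The teardown traced above is the mirror image: destroy_subsystems deletes each NVMe-oF subsystem and then its backing null bdev before this round re-creates them with --dif-type 1. A sketch of the manual equivalent for subsystem 0, again via scripts/rpc.py and only as an illustration of the RPC order, not the script's literal code path:

  # Delete the subsystem first so no namespace still references the bdev
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0
  # This round then recreates the bdev with DIF type 1 instead of type 2
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1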
00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 [2024-05-16 09:50:28.658805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 bdev_null1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.415 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:36.416 { 00:37:36.416 "params": { 00:37:36.416 "name": "Nvme$subsystem", 00:37:36.416 "trtype": "$TEST_TRANSPORT", 00:37:36.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.416 "adrfam": "ipv4", 00:37:36.416 "trsvcid": "$NVMF_PORT", 00:37:36.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.416 "hdgst": ${hdgst:-false}, 00:37:36.416 "ddgst": ${ddgst:-false} 00:37:36.416 }, 00:37:36.416 "method": "bdev_nvme_attach_controller" 00:37:36.416 } 00:37:36.416 EOF 00:37:36.416 )") 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:36.416 { 00:37:36.416 "params": { 00:37:36.416 "name": "Nvme$subsystem", 00:37:36.416 "trtype": "$TEST_TRANSPORT", 00:37:36.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.416 "adrfam": "ipv4", 00:37:36.416 "trsvcid": "$NVMF_PORT", 00:37:36.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.416 "hdgst": ${hdgst:-false}, 00:37:36.416 "ddgst": ${ddgst:-false} 00:37:36.416 }, 00:37:36.416 "method": "bdev_nvme_attach_controller" 00:37:36.416 } 00:37:36.416 EOF 00:37:36.416 )") 00:37:36.416 
09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:36.416 "params": { 00:37:36.416 "name": "Nvme0", 00:37:36.416 "trtype": "tcp", 00:37:36.416 "traddr": "10.0.0.2", 00:37:36.416 "adrfam": "ipv4", 00:37:36.416 "trsvcid": "4420", 00:37:36.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.416 "hdgst": false, 00:37:36.416 "ddgst": false 00:37:36.416 }, 00:37:36.416 "method": "bdev_nvme_attach_controller" 00:37:36.416 },{ 00:37:36.416 "params": { 00:37:36.416 "name": "Nvme1", 00:37:36.416 "trtype": "tcp", 00:37:36.416 "traddr": "10.0.0.2", 00:37:36.416 "adrfam": "ipv4", 00:37:36.416 "trsvcid": "4420", 00:37:36.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:36.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:36.416 "hdgst": false, 00:37:36.416 "ddgst": false 00:37:36.416 }, 00:37:36.416 "method": "bdev_nvme_attach_controller" 00:37:36.416 }' 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:36.416 09:50:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.416 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:36.416 ... 00:37:36.416 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:36.416 ... 
00:37:36.416 fio-3.35 00:37:36.416 Starting 4 threads 00:37:36.416 EAL: No free 2048 kB hugepages reported on node 1 00:37:41.710 00:37:41.710 filename0: (groupid=0, jobs=1): err= 0: pid=563652: Thu May 16 09:50:34 2024 00:37:41.710 read: IOPS=2124, BW=16.6MiB/s (17.4MB/s)(83.0MiB/5002msec) 00:37:41.710 slat (nsec): min=5695, max=71505, avg=8710.67, stdev=3387.96 00:37:41.710 clat (usec): min=1872, max=6164, avg=3741.56, stdev=468.76 00:37:41.710 lat (usec): min=1891, max=6173, avg=3750.27, stdev=468.67 00:37:41.710 clat percentiles (usec): 00:37:41.710 | 1.00th=[ 2802], 5.00th=[ 3130], 10.00th=[ 3359], 20.00th=[ 3523], 00:37:41.710 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:37:41.710 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 4113], 95.00th=[ 4883], 00:37:41.710 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 5932], 99.95th=[ 6063], 00:37:41.710 | 99.99th=[ 6128] 00:37:41.710 bw ( KiB/s): min=16480, max=17312, per=25.14%, avg=16965.33, stdev=280.11, samples=9 00:37:41.710 iops : min= 2060, max= 2164, avg=2120.67, stdev=35.01, samples=9 00:37:41.710 lat (msec) : 2=0.03%, 4=88.16%, 10=11.81% 00:37:41.710 cpu : usr=97.52%, sys=2.22%, ctx=7, majf=0, minf=79 00:37:41.710 IO depths : 1=0.1%, 2=0.3%, 4=72.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.710 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.710 issued rwts: total=10626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:41.710 filename0: (groupid=0, jobs=1): err= 0: pid=563653: Thu May 16 09:50:34 2024 00:37:41.710 read: IOPS=2159, BW=16.9MiB/s (17.7MB/s)(84.4MiB/5003msec) 00:37:41.711 slat (nsec): min=2881, max=42077, avg=6859.31, stdev=2499.85 00:37:41.711 clat (usec): min=1691, max=10450, avg=3688.49, stdev=435.90 00:37:41.711 lat (usec): min=1699, max=10464, avg=3695.35, stdev=435.85 00:37:41.711 clat percentiles (usec): 00:37:41.711 | 1.00th=[ 2769], 5.00th=[ 3130], 10.00th=[ 3326], 20.00th=[ 3523], 00:37:41.711 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3752], 00:37:41.711 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 3851], 95.00th=[ 4146], 00:37:41.711 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5997], 99.95th=[10159], 00:37:41.711 | 99.99th=[10421] 00:37:41.711 bw ( KiB/s): min=16816, max=17680, per=25.57%, avg=17256.89, stdev=256.64, samples=9 00:37:41.711 iops : min= 2102, max= 2210, avg=2157.11, stdev=32.08, samples=9 00:37:41.711 lat (msec) : 2=0.18%, 4=91.93%, 10=7.82%, 20=0.07% 00:37:41.711 cpu : usr=97.68%, sys=2.06%, ctx=6, majf=0, minf=86 00:37:41.711 IO depths : 1=0.1%, 2=0.4%, 4=65.2%, 8=34.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.711 complete : 0=0.0%, 4=97.9%, 8=2.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.711 issued rwts: total=10803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:41.711 filename1: (groupid=0, jobs=1): err= 0: pid=563654: Thu May 16 09:50:34 2024 00:37:41.711 read: IOPS=2125, BW=16.6MiB/s (17.4MB/s)(83.1MiB/5003msec) 00:37:41.711 slat (nsec): min=5648, max=87695, avg=8510.11, stdev=3614.41 00:37:41.711 clat (usec): min=2041, max=6356, avg=3744.37, stdev=409.88 00:37:41.711 lat (usec): min=2051, max=6361, avg=3752.88, stdev=409.67 00:37:41.711 clat percentiles (usec): 00:37:41.711 | 1.00th=[ 3032], 5.00th=[ 
3326], 10.00th=[ 3458], 20.00th=[ 3523], 00:37:41.711 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3752], 00:37:41.711 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 3982], 95.00th=[ 4293], 00:37:41.711 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6194], 00:37:41.711 | 99.99th=[ 6325] 00:37:41.711 bw ( KiB/s): min=16384, max=17440, per=25.20%, avg=17008.10, stdev=353.20, samples=10 00:37:41.711 iops : min= 2048, max= 2180, avg=2126.00, stdev=44.15, samples=10 00:37:41.711 lat (msec) : 4=90.60%, 10=9.40% 00:37:41.711 cpu : usr=97.88%, sys=1.86%, ctx=6, majf=0, minf=71 00:37:41.711 IO depths : 1=0.1%, 2=0.1%, 4=65.9%, 8=34.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.711 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.711 issued rwts: total=10633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:41.711 filename1: (groupid=0, jobs=1): err= 0: pid=563655: Thu May 16 09:50:34 2024 00:37:41.711 read: IOPS=2028, BW=15.8MiB/s (16.6MB/s)(79.2MiB/5001msec) 00:37:41.711 slat (nsec): min=5642, max=86088, avg=6671.65, stdev=2751.00 00:37:41.711 clat (usec): min=955, max=7985, avg=3926.29, stdev=669.84 00:37:41.711 lat (usec): min=961, max=8018, avg=3932.97, stdev=669.61 00:37:41.711 clat percentiles (usec): 00:37:41.711 | 1.00th=[ 3097], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3589], 00:37:41.711 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:37:41.711 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 5407], 95.00th=[ 5604], 00:37:41.711 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 7177], 00:37:41.711 | 99.99th=[ 7898] 00:37:41.711 bw ( KiB/s): min=15968, max=16880, per=23.99%, avg=16190.11, stdev=294.71, samples=9 00:37:41.711 iops : min= 1996, max= 2110, avg=2023.67, stdev=36.91, samples=9 00:37:41.711 lat (usec) : 1000=0.03% 00:37:41.711 lat (msec) : 2=0.11%, 4=81.46%, 10=18.41% 00:37:41.711 cpu : usr=97.92%, sys=1.86%, ctx=8, majf=0, minf=109 00:37:41.711 IO depths : 1=0.1%, 2=0.1%, 4=72.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.711 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.711 issued rwts: total=10143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.711 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:41.711 00:37:41.711 Run status group 0 (all jobs): 00:37:41.711 READ: bw=65.9MiB/s (69.1MB/s), 15.8MiB/s-16.9MiB/s (16.6MB/s-17.7MB/s), io=330MiB (346MB), run=5001-5003msec 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 00:37:41.711 real 0m24.531s 00:37:41.711 user 5m19.899s 00:37:41.711 sys 0m3.692s 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 ************************************ 00:37:41.711 END TEST fio_dif_rand_params 00:37:41.711 ************************************ 00:37:41.711 09:50:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:41.711 09:50:35 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:41.711 09:50:35 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 ************************************ 00:37:41.711 START TEST fio_dif_digest 00:37:41.711 ************************************ 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 bdev_null0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.711 [2024-05-16 09:50:35.164750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:41.711 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:41.711 { 00:37:41.712 "params": { 00:37:41.712 "name": "Nvme$subsystem", 00:37:41.712 "trtype": "$TEST_TRANSPORT", 00:37:41.712 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:41.712 "adrfam": "ipv4", 00:37:41.712 "trsvcid": "$NVMF_PORT", 00:37:41.712 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:41.712 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:41.712 "hdgst": ${hdgst:-false}, 00:37:41.712 "ddgst": ${ddgst:-false} 00:37:41.712 }, 00:37:41.712 "method": "bdev_nvme_attach_controller" 00:37:41.712 } 00:37:41.712 EOF 00:37:41.712 )") 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:41.712 "params": { 00:37:41.712 "name": "Nvme0", 00:37:41.712 "trtype": "tcp", 00:37:41.712 "traddr": "10.0.0.2", 00:37:41.712 "adrfam": "ipv4", 00:37:41.712 "trsvcid": "4420", 00:37:41.712 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:41.712 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:41.712 "hdgst": true, 00:37:41.712 "ddgst": true 00:37:41.712 }, 00:37:41.712 "method": "bdev_nvme_attach_controller" 00:37:41.712 }' 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:41.712 09:50:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:42.309 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:42.309 ... 
00:37:42.309 fio-3.35 00:37:42.309 Starting 3 threads 00:37:42.309 EAL: No free 2048 kB hugepages reported on node 1 00:37:54.536 00:37:54.536 filename0: (groupid=0, jobs=1): err= 0: pid=565121: Thu May 16 09:50:46 2024 00:37:54.536 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(302MiB/10007msec) 00:37:54.536 slat (nsec): min=6063, max=49217, avg=8459.39, stdev=2054.71 00:37:54.536 clat (usec): min=6595, max=17868, avg=12409.48, stdev=2105.35 00:37:54.536 lat (usec): min=6602, max=17875, avg=12417.93, stdev=2105.37 00:37:54.536 clat percentiles (usec): 00:37:54.536 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:37:54.536 | 30.00th=[11338], 40.00th=[12387], 50.00th=[12911], 60.00th=[13304], 00:37:54.536 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:37:54.536 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17695], 99.95th=[17695], 00:37:54.536 | 99.99th=[17957] 00:37:54.536 bw ( KiB/s): min=28672, max=33792, per=37.03%, avg=30912.00, stdev=1571.24, samples=20 00:37:54.536 iops : min= 224, max= 264, avg=241.50, stdev=12.28, samples=20 00:37:54.536 lat (msec) : 10=19.36%, 20=80.64% 00:37:54.536 cpu : usr=96.17%, sys=3.60%, ctx=20, majf=0, minf=186 00:37:54.537 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.537 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:54.537 filename0: (groupid=0, jobs=1): err= 0: pid=565122: Thu May 16 09:50:46 2024 00:37:54.537 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10045msec) 00:37:54.537 slat (nsec): min=5987, max=36676, avg=8699.86, stdev=1636.41 00:37:54.537 clat (usec): min=7631, max=95301, avg=16072.87, stdev=11693.61 00:37:54.537 lat (usec): min=7639, max=95308, avg=16081.57, stdev=11693.66 00:37:54.537 clat percentiles (usec): 00:37:54.537 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11469], 20.00th=[12125], 00:37:54.537 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:37:54.537 | 70.00th=[13566], 80.00th=[14091], 90.00th=[15008], 95.00th=[53216], 00:37:54.537 | 99.00th=[55837], 99.50th=[57410], 99.90th=[94897], 99.95th=[94897], 00:37:54.537 | 99.99th=[94897] 00:37:54.537 bw ( KiB/s): min=19200, max=27136, per=28.66%, avg=23923.20, stdev=2227.85, samples=20 00:37:54.537 iops : min= 150, max= 212, avg=186.90, stdev=17.41, samples=20 00:37:54.537 lat (msec) : 10=3.96%, 20=88.46%, 50=0.11%, 100=7.48% 00:37:54.537 cpu : usr=96.40%, sys=3.38%, ctx=21, majf=0, minf=91 00:37:54.537 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.537 issued rwts: total=1871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:54.537 filename0: (groupid=0, jobs=1): err= 0: pid=565123: Thu May 16 09:50:46 2024 00:37:54.537 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(283MiB/10047msec) 00:37:54.537 slat (nsec): min=6036, max=36220, avg=8695.01, stdev=2212.03 00:37:54.537 clat (usec): min=7626, max=56068, avg=13283.20, stdev=4786.69 00:37:54.537 lat (usec): min=7633, max=56083, avg=13291.89, stdev=4787.00 00:37:54.537 clat percentiles (usec): 00:37:54.537 | 1.00th=[ 8356], 
5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10683], 00:37:54.537 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698], 00:37:54.537 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15139], 95.00th=[15664], 00:37:54.537 | 99.00th=[52167], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:37:54.537 | 99.99th=[55837] 00:37:54.537 bw ( KiB/s): min=24576, max=32256, per=34.69%, avg=28953.60, stdev=2291.09, samples=20 00:37:54.537 iops : min= 192, max= 252, avg=226.20, stdev=17.90, samples=20 00:37:54.537 lat (msec) : 10=14.18%, 20=84.67%, 50=0.09%, 100=1.06% 00:37:54.537 cpu : usr=95.35%, sys=3.75%, ctx=476, majf=0, minf=179 00:37:54.537 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.537 issued rwts: total=2264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:54.537 00:37:54.537 Run status group 0 (all jobs): 00:37:54.537 READ: bw=81.5MiB/s (85.5MB/s), 23.3MiB/s-30.2MiB/s (24.4MB/s-31.7MB/s), io=819MiB (859MB), run=10007-10047msec 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.537 00:37:54.537 real 0m11.325s 00:37:54.537 user 0m45.430s 00:37:54.537 sys 0m1.414s 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:54.537 09:50:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.537 ************************************ 00:37:54.537 END TEST fio_dif_digest 00:37:54.537 ************************************ 00:37:54.537 09:50:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:54.537 09:50:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:54.537 rmmod nvme_tcp 00:37:54.537 rmmod nvme_fabrics 00:37:54.537 
rmmod nvme_keyring 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 554639 ']' 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 554639 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 554639 ']' 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 554639 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 554639 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 554639' 00:37:54.537 killing process with pid 554639 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@965 -- # kill 554639 00:37:54.537 [2024-05-16 09:50:46.621695] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:54.537 09:50:46 nvmf_dif -- common/autotest_common.sh@970 -- # wait 554639 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:54.537 09:50:46 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:56.450 Waiting for block devices as requested 00:37:56.711 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:56.711 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:56.711 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:56.972 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:56.972 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:56.972 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:57.232 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:57.232 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:57.232 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:57.493 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:57.493 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:57.493 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:57.753 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:57.753 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:57.753 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:57.753 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:58.014 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:58.275 09:50:51 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:58.275 09:50:51 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:58.275 09:50:51 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:58.275 09:50:51 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:58.275 09:50:51 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:58.275 09:50:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:58.275 09:50:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:00.186 09:50:53 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:00.186 00:38:00.186 real 
1m17.076s 00:38:00.186 user 8m4.605s 00:38:00.186 sys 0m18.917s 00:38:00.186 09:50:53 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:00.186 09:50:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:00.186 ************************************ 00:38:00.186 END TEST nvmf_dif 00:38:00.186 ************************************ 00:38:00.446 09:50:53 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:00.446 09:50:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:00.446 09:50:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:00.446 09:50:53 -- common/autotest_common.sh@10 -- # set +x 00:38:00.446 ************************************ 00:38:00.446 START TEST nvmf_abort_qd_sizes 00:38:00.446 ************************************ 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:00.446 * Looking for test storage... 00:38:00.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:00.446 09:50:53 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
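On a phy runner, nvmftestinit goes on to scan for supported NICs (the e810/x722/mlx PCI ID tables that follow) and then wires two ports of the same adapter back to back: one port stays in the default namespace as the initiator side, the other is moved into a private network namespace to act as the target side. The ip/iptables commands traced further below reduce to roughly this sketch (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this runner; the preliminary address flushes are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two ping checks that follow confirm the 10.0.0.1 <-> 10.0.0.2 path in both directions before any NVMe/TCP traffic is started.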
00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:00.447 09:50:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:08.583 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:08.583 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:08.583 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:08.583 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:08.583 09:51:00 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:08.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.809 ms 00:38:08.583 00:38:08.583 --- 10.0.0.2 ping statistics --- 00:38:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.583 rtt min/avg/max/mdev = 0.809/0.809/0.809/0.000 ms 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:08.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:38:08.583 00:38:08.583 --- 10.0.0.1 ping statistics --- 00:38:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.583 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:08.583 09:51:01 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:11.128 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:11.128 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=574584 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 574584 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 574584 ']' 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
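nvmfappstart -m 0xf then launches the target application inside that namespace and blocks until its RPC socket is usable; the 574584 recorded as nvmfpid in the trace above is the resulting process. Roughly, ignoring the pid/trap bookkeeping the helper also does:

    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPC requests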
00:38:11.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:11.389 09:51:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:11.389 [2024-05-16 09:51:04.923378] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:38:11.389 [2024-05-16 09:51:04.923437] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:11.649 EAL: No free 2048 kB hugepages reported on node 1 00:38:11.649 [2024-05-16 09:51:04.993386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:11.649 [2024-05-16 09:51:05.062155] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:11.649 [2024-05-16 09:51:05.062194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:11.649 [2024-05-16 09:51:05.062202] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:11.649 [2024-05-16 09:51:05.062212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:11.649 [2024-05-16 09:51:05.062218] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:11.649 [2024-05-16 09:51:05.062361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.649 [2024-05-16 09:51:05.062494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:11.649 [2024-05-16 09:51:05.062649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:11.649 [2024-05-16 09:51:05.062649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:12.219 09:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:12.220 09:51:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:12.220 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:12.220 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:12.220 09:51:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:12.479 ************************************ 00:38:12.479 START TEST spdk_target_abort 00:38:12.479 ************************************ 00:38:12.479 09:51:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:38:12.479 09:51:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:12.480 09:51:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:12.480 09:51:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.480 09:51:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 spdk_targetn1 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 [2024-05-16 09:51:06.106040] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:12.740 [2024-05-16 09:51:06.142799] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:12.740 [2024-05-16 09:51:06.143040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.740 09:51:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.740 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.000 [2024-05-16 09:51:06.312538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:608 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:38:13.000 [2024-05-16 09:51:06.312563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:38:13.000 [2024-05-16 09:51:06.313861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:688 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:38:13.000 [2024-05-16 09:51:06.313874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:38:13.000 [2024-05-16 09:51:06.322032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:968 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:38:13.000 [2024-05-16 09:51:06.322049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:007b p:1 m:0 dnr:0 00:38:13.000 [2024-05-16 09:51:06.360552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2392 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:38:13.000 [2024-05-16 09:51:06.360569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:16.300 Initializing NVMe Controllers 00:38:16.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:16.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:16.300 Initialization complete. Launching workers. 
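The queue-depth sweep running here condenses to the loop below. The binary path, flags, and target string are exactly those shown in the trace; this is only a readability sketch of the rabort() loop in target/abort_qd_sizes.sh, and the NS/CTRLR/success counters that follow each run are printed by the abort example itself.
```bash
# Sketch of the rabort() qd sweep as traced above (paths and flags verbatim from the trace).
abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # 50% read / 50% write, 4 KiB I/O, aborting outstanding commands at each queue depth
    "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```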
00:38:16.300 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12834, failed: 4 00:38:16.300 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2751, failed to submit 10087 00:38:16.300 success 749, unsuccess 2002, failed 0 00:38:16.300 09:51:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:16.300 09:51:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:16.300 EAL: No free 2048 kB hugepages reported on node 1 00:38:16.300 [2024-05-16 09:51:09.622151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:296 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:38:16.300 [2024-05-16 09:51:09.622188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:38:16.300 [2024-05-16 09:51:09.638182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:688 len:8 PRP1 0x200007c58000 PRP2 0x0 00:38:16.300 [2024-05-16 09:51:09.638208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0063 p:1 m:0 dnr:0 00:38:16.300 [2024-05-16 09:51:09.702610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:2392 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:38:16.300 [2024-05-16 09:51:09.702639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:16.561 [2024-05-16 09:51:10.059183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:10488 len:8 PRP1 0x200007c44000 PRP2 0x0 00:38:16.561 [2024-05-16 09:51:10.059216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:17.132 [2024-05-16 09:51:10.645043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:23520 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:38:17.132 [2024-05-16 09:51:10.645093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:18.517 [2024-05-16 09:51:11.685437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:47008 len:8 PRP1 0x200007c54000 PRP2 0x0 00:38:18.517 [2024-05-16 09:51:11.685470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00fe p:1 m:0 dnr:0 00:38:19.460 Initializing NVMe Controllers 00:38:19.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:19.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:19.460 Initialization complete. Launching workers. 
00:38:19.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 6 00:38:19.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7301 00:38:19.460 success 333, unsuccess 908, failed 0 00:38:19.460 09:51:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:19.460 09:51:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:19.460 EAL: No free 2048 kB hugepages reported on node 1 00:38:22.021 [2024-05-16 09:51:15.095109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:150 nsid:1 lba:245464 len:8 PRP1 0x200007904000 PRP2 0x0 00:38:22.021 [2024-05-16 09:51:15.095141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:150 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:38:22.594 Initializing NVMe Controllers 00:38:22.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:22.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:22.594 Initialization complete. Launching workers. 00:38:22.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42184, failed: 1 00:38:22.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2756, failed to submit 39429 00:38:22.594 success 591, unsuccess 2165, failed 0 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:22.594 09:51:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 574584 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 574584 ']' 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 574584 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 574584 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 574584' 00:38:24.508 killing process with pid 574584 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 574584 00:38:24.508 [2024-05-16 09:51:17.851282] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 574584 00:38:24.508 00:38:24.508 real 0m12.189s 00:38:24.508 user 0m49.802s 00:38:24.508 sys 0m1.629s 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:24.508 09:51:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:24.508 ************************************ 00:38:24.508 END TEST spdk_target_abort 00:38:24.508 ************************************ 00:38:24.508 09:51:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:24.508 09:51:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:24.508 09:51:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:24.508 09:51:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:24.769 ************************************ 00:38:24.769 START TEST kernel_target_abort 00:38:24.769 ************************************ 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:24.769 09:51:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:28.072 Waiting for block devices as requested 00:38:28.072 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:28.072 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:28.072 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:28.332 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:28.332 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:28.332 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:28.591 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:28.591 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:28.591 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:28.853 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:28.853 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:28.853 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:29.112 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:29.112 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:29.112 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:29.112 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:29.373 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:29.634 09:51:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:29.634 No valid GPT data, bailing 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 
1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:38:29.634 00:38:29.634 Discovery Log Number of Records 2, Generation counter 2 00:38:29.634 =====Discovery Log Entry 0====== 00:38:29.634 trtype: tcp 00:38:29.634 adrfam: ipv4 00:38:29.634 subtype: current discovery subsystem 00:38:29.634 treq: not specified, sq flow control disable supported 00:38:29.634 portid: 1 00:38:29.634 trsvcid: 4420 00:38:29.634 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:29.634 traddr: 10.0.0.1 00:38:29.634 eflags: none 00:38:29.634 sectype: none 00:38:29.634 =====Discovery Log Entry 1====== 00:38:29.634 trtype: tcp 00:38:29.634 adrfam: ipv4 00:38:29.634 subtype: nvme subsystem 00:38:29.634 treq: not specified, sq flow control disable supported 00:38:29.634 portid: 1 00:38:29.634 trsvcid: 4420 00:38:29.634 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:29.634 traddr: 10.0.0.1 00:38:29.634 eflags: none 00:38:29.634 sectype: none 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- 
# local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:29.634 09:51:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.894 EAL: No free 2048 kB hugepages reported on node 1 00:38:33.193 Initializing NVMe Controllers 00:38:33.193 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:33.193 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:33.193 Initialization complete. Launching workers. 
00:38:33.193 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69564, failed: 0 00:38:33.193 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 69564, failed to submit 0 00:38:33.193 success 0, unsuccess 69564, failed 0 00:38:33.193 09:51:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:33.193 09:51:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:33.193 EAL: No free 2048 kB hugepages reported on node 1 00:38:36.494 Initializing NVMe Controllers 00:38:36.494 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:36.494 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:36.494 Initialization complete. Launching workers. 00:38:36.494 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 110530, failed: 0 00:38:36.494 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27818, failed to submit 82712 00:38:36.494 success 0, unsuccess 27818, failed 0 00:38:36.494 09:51:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:36.494 09:51:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.494 EAL: No free 2048 kB hugepages reported on node 1 00:38:39.038 Initializing NVMe Controllers 00:38:39.038 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:39.038 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:39.038 Initialization complete. Launching workers. 
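For the kernel-target half of the test, the configfs sequence traced before these runs (configure_kernel_target in test/nvmf/common.sh) reduces to the sketch below. xtrace shows only the echo arguments, not their redirection targets, so the nvmet attribute file names here are the standard configfs ones and are an assumption; the traced 'echo SPDK-nqn.…' identifier string is omitted because its destination attribute is not visible in the log.
```bash
# Assumed reconstruction of the traced kernel nvmet target setup (attribute
# names are the standard nvmet configfs ones, not visible in the xtrace output).
nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$sub/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$sub" "$ns" "$port"
echo 1            > "$sub/attr_allow_any_host"   # the traced 'echo 1'
echo /dev/nvme0n1 > "$ns/device_path"            # backing namespace
echo 1            > "$ns/enable"
echo 10.0.0.1     > "$port/addr_traddr"          # NVMe/TCP listener on 10.0.0.1:4420
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                 # expose the subsystem on the port
```
The rm/rmdir/modprobe -r lines traced after the runs undo these steps in reverse order.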
00:38:39.038 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105660, failed: 0 00:38:39.038 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26422, failed to submit 79238 00:38:39.038 success 0, unsuccess 26422, failed 0 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:39.038 09:51:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:42.341 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:42.341 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:42.341 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:42.341 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:42.341 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:42.341 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:42.602 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:44.545 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:44.545 00:38:44.545 real 0m20.018s 00:38:44.545 user 0m9.880s 00:38:44.545 sys 0m5.899s 00:38:44.545 09:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:44.545 09:51:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.545 ************************************ 00:38:44.546 END TEST kernel_target_abort 00:38:44.546 ************************************ 00:38:44.807 09:51:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:44.807 09:51:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:44.808 rmmod nvme_tcp 00:38:44.808 rmmod nvme_fabrics 00:38:44.808 rmmod nvme_keyring 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 574584 ']' 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 574584 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 574584 ']' 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 574584 00:38:44.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (574584) - No such process 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 574584 is not found' 00:38:44.808 Process with pid 574584 is not found 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:44.808 09:51:38 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:48.111 Waiting for block devices as requested 00:38:48.111 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:48.111 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:48.111 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:48.372 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:48.372 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:48.372 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:48.632 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:48.632 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:48.632 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:48.893 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:48.893 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:49.153 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:49.153 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:49.153 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:49.153 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:49.413 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:49.413 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:49.673 09:51:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:51.586 09:51:45 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:51.586 00:38:51.586 real 0m51.334s 00:38:51.586 user 1m4.862s 00:38:51.586 sys 0m18.074s 00:38:51.586 09:51:45 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:38:51.586 09:51:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:51.586 ************************************ 00:38:51.586 END TEST nvmf_abort_qd_sizes 00:38:51.586 ************************************ 00:38:51.848 09:51:45 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:51.848 09:51:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:51.848 09:51:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:51.848 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:38:51.848 ************************************ 00:38:51.848 START TEST keyring_file 00:38:51.848 ************************************ 00:38:51.848 09:51:45 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:51.848 * Looking for test storage... 00:38:51.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:51.848 09:51:45 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:51.848 09:51:45 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:51.848 09:51:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:51.849 09:51:45 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:51.849 09:51:45 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:51.849 09:51:45 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:51.849 09:51:45 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.849 09:51:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.849 09:51:45 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.849 09:51:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:51.849 09:51:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:51.849 09:51:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:51.849 09:51:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:51.849 09:51:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:51.849 09:51:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:51.849 09:51:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:51.849 09:51:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:51.849 09:51:45 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.g36PefsoLC 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:51.849 09:51:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.g36PefsoLC 00:38:51.849 09:51:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.g36PefsoLC 00:38:52.111 09:51:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.g36PefsoLC 00:38:52.111 09:51:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gnqTS0Tr53 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:52.111 09:51:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:52.111 09:51:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:52.111 09:51:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:52.111 09:51:45 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:52.111 09:51:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:52.111 09:51:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gnqTS0Tr53 00:38:52.111 09:51:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gnqTS0Tr53 00:38:52.111 09:51:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gnqTS0Tr53 00:38:52.111 09:51:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=585220 00:38:52.111 09:51:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 585220 00:38:52.111 09:51:45 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:52.111 09:51:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 585220 ']' 00:38:52.111 09:51:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.111 09:51:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:52.112 09:51:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
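The two PSK files used below were prepared just above by prep_key (test/keyring/common.sh); condensed, that amounts to the sketch below. The redirection of format_interchange_psk's output into the temp file is implied rather than shown by xtrace; format_interchange_psk (test/nvmf/common.sh) is the traced python helper that wraps the hex key into an 'NVMeTLSkey-1:…' interchange string.
```bash
# Sketch of prep_key as traced above; the > "$path" redirection is assumed,
# since xtrace does not print redirections.
prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                                   # e.g. /tmp/tmp.g36PefsoLC
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"                               # looser modes are rejected by the keyring
    echo "$path"
}
key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)
```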
00:38:52.112 09:51:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:52.112 09:51:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:52.112 [2024-05-16 09:51:45.532581] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:38:52.112 [2024-05-16 09:51:45.532656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585220 ] 00:38:52.112 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.112 [2024-05-16 09:51:45.597743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.372 [2024-05-16 09:51:45.672088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:38:52.943 09:51:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:52.943 [2024-05-16 09:51:46.319170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.943 null0 00:38:52.943 [2024-05-16 09:51:46.351203] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:52.943 [2024-05-16 09:51:46.351250] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:52.943 [2024-05-16 09:51:46.351456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:52.943 [2024-05-16 09:51:46.359237] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:52.943 09:51:46 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:52.943 [2024-05-16 09:51:46.375281] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:52.943 request: 00:38:52.943 { 00:38:52.943 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.943 "secure_channel": false, 00:38:52.943 "listen_address": { 00:38:52.943 "trtype": "tcp", 00:38:52.943 "traddr": 
"127.0.0.1", 00:38:52.943 "trsvcid": "4420" 00:38:52.943 }, 00:38:52.943 "method": "nvmf_subsystem_add_listener", 00:38:52.943 "req_id": 1 00:38:52.943 } 00:38:52.943 Got JSON-RPC error response 00:38:52.943 response: 00:38:52.943 { 00:38:52.943 "code": -32602, 00:38:52.943 "message": "Invalid parameters" 00:38:52.943 } 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:52.943 09:51:46 keyring_file -- keyring/file.sh@46 -- # bperfpid=585361 00:38:52.943 09:51:46 keyring_file -- keyring/file.sh@48 -- # waitforlisten 585361 /var/tmp/bperf.sock 00:38:52.943 09:51:46 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 585361 ']' 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:52.943 09:51:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:52.943 [2024-05-16 09:51:46.431188] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 
00:38:52.943 [2024-05-16 09:51:46.431234] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585361 ] 00:38:52.943 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.943 [2024-05-16 09:51:46.482283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.204 [2024-05-16 09:51:46.535510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.775 09:51:47 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:53.775 09:51:47 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:38:53.775 09:51:47 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:53.775 09:51:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:54.036 09:51:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gnqTS0Tr53 00:38:54.036 09:51:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gnqTS0Tr53 00:38:54.036 09:51:47 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:54.036 09:51:47 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:54.036 09:51:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.036 09:51:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.036 09:51:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.297 09:51:47 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.g36PefsoLC == \/\t\m\p\/\t\m\p\.\g\3\6\P\e\f\s\o\L\C ]] 00:38:54.297 09:51:47 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:38:54.297 09:51:47 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.297 09:51:47 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.gnqTS0Tr53 == \/\t\m\p\/\t\m\p\.\g\n\q\T\S\0\T\r\5\3 ]] 00:38:54.297 09:51:47 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.297 09:51:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:54.557 09:51:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:54.557 09:51:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:54.557 09:51:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:54.557 09:51:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:54.557 09:51:47 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.557 09:51:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.557 09:51:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:54.819 09:51:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:54.819 09:51:48 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.819 09:51:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.819 [2024-05-16 09:51:48.292013] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:54.819 nvme0n1 00:38:55.084 09:51:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.084 09:51:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:55.084 09:51:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.084 09:51:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:55.347 09:51:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:38:55.347 09:51:48 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:55.347 Running I/O for 1 seconds... 
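The one-second run whose latency table follows was set up entirely over /var/tmp/bperf.sock; condensed, the traced RPC sequence is the one below, and the refcnt checks in the trace confirm that attaching with --psk key0 raises key0's reference count from 1 to 2 while the unused key1 stays at 1.
```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Register both PSK files with bdevperf's keyring, then attach over TLS using key0
# (commands as traced above).
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g36PefsoLC
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gnqTS0Tr53
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0
# A key referenced by an attached controller reports refcnt 2; an idle key reports 1.
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
```
bdevperf.py perform_tests then drives the randrw workload reported just below.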
00:38:56.291 00:38:56.291 Latency(us) 00:38:56.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.291 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:56.291 nvme0n1 : 1.01 17553.28 68.57 0.00 0.00 7261.83 3386.03 9994.24 00:38:56.291 =================================================================================================================== 00:38:56.292 Total : 17553.28 68.57 0.00 0.00 7261.83 3386.03 9994.24 00:38:56.292 0 00:38:56.292 09:51:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:56.292 09:51:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:56.552 09:51:49 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:38:56.552 09:51:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.552 09:51:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:56.552 09:51:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.552 09:51:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:56.552 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.814 09:51:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:38:56.814 09:51:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:38:56.814 09:51:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:56.814 09:51:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.814 09:51:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.814 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.814 09:51:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:56.814 09:51:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:56.814 09:51:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:56.814 09:51:50 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:56.814 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:38:57.075 [2024-05-16 09:51:50.473734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:57.075 [2024-05-16 09:51:50.474477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23913b0 (107): Transport endpoint is not connected 00:38:57.075 [2024-05-16 09:51:50.475472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23913b0 (9): Bad file descriptor 00:38:57.075 [2024-05-16 09:51:50.476473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:57.075 [2024-05-16 09:51:50.476482] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:57.075 [2024-05-16 09:51:50.476487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:57.075 request: 00:38:57.075 { 00:38:57.075 "name": "nvme0", 00:38:57.075 "trtype": "tcp", 00:38:57.075 "traddr": "127.0.0.1", 00:38:57.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:57.075 "adrfam": "ipv4", 00:38:57.075 "trsvcid": "4420", 00:38:57.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:57.075 "psk": "key1", 00:38:57.075 "method": "bdev_nvme_attach_controller", 00:38:57.075 "req_id": 1 00:38:57.075 } 00:38:57.075 Got JSON-RPC error response 00:38:57.075 response: 00:38:57.075 { 00:38:57.075 "code": -32602, 00:38:57.075 "message": "Invalid parameters" 00:38:57.075 } 00:38:57.075 09:51:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:57.075 09:51:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:57.075 09:51:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:57.075 09:51:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:57.075 09:51:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:38:57.075 09:51:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:57.075 09:51:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:57.075 09:51:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.075 09:51:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:57.075 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.336 09:51:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:38:57.336 09:51:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:38:57.336 09:51:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:57.336 09:51:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:57.336 09:51:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:57.336 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.336 09:51:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:57.336 09:51:50 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:57.336 09:51:50 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:38:57.336 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key 
key0 00:38:57.597 09:51:50 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:38:57.597 09:51:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:57.597 09:51:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:38:57.597 09:51:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:57.597 09:51:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:38:57.857 09:51:51 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:38:57.857 09:51:51 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.g36PefsoLC 00:38:57.857 09:51:51 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:57.858 09:51:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:57.858 09:51:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:58.118 [2024-05-16 09:51:51.449334] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.g36PefsoLC': 0100660 00:38:58.118 [2024-05-16 09:51:51.449350] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:58.118 request: 00:38:58.118 { 00:38:58.118 "name": "key0", 00:38:58.119 "path": "/tmp/tmp.g36PefsoLC", 00:38:58.119 "method": "keyring_file_add_key", 00:38:58.119 "req_id": 1 00:38:58.119 } 00:38:58.119 Got JSON-RPC error response 00:38:58.119 response: 00:38:58.119 { 00:38:58.119 "code": -1, 00:38:58.119 "message": "Operation not permitted" 00:38:58.119 } 00:38:58.119 09:51:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:58.119 09:51:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:58.119 09:51:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:58.119 09:51:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:58.119 09:51:51 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.g36PefsoLC 00:38:58.119 09:51:51 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:58.119 09:51:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.g36PefsoLC 00:38:58.119 09:51:51 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.g36PefsoLC 00:38:58.119 09:51:51 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:38:58.119 09:51:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:58.119 09:51:51 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:38:58.119 09:51:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:58.119 09:51:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:58.119 09:51:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:58.380 09:51:51 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:38:58.380 09:51:51 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.380 09:51:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.380 [2024-05-16 09:51:51.922522] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.g36PefsoLC': No such file or directory 00:38:58.380 [2024-05-16 09:51:51.922533] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:58.380 [2024-05-16 09:51:51.922549] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:58.380 [2024-05-16 09:51:51.922553] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:58.380 [2024-05-16 09:51:51.922557] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:58.380 request: 00:38:58.380 { 00:38:58.380 "name": "nvme0", 00:38:58.380 "trtype": "tcp", 00:38:58.380 "traddr": "127.0.0.1", 00:38:58.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:58.380 "adrfam": "ipv4", 00:38:58.380 "trsvcid": "4420", 00:38:58.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.380 "psk": "key0", 00:38:58.380 "method": "bdev_nvme_attach_controller", 00:38:58.380 "req_id": 1 00:38:58.380 } 00:38:58.380 Got JSON-RPC error response 00:38:58.380 response: 00:38:58.380 { 00:38:58.380 "code": -19, 00:38:58.380 "message": "No such device" 00:38:58.380 } 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:58.380 09:51:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:58.380 09:51:51 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:38:58.380 09:51:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:58.642 09:51:52 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JDGn4ps76E 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:58.642 09:51:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:58.642 09:51:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:58.642 09:51:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:58.642 09:51:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:58.642 09:51:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:58.642 09:51:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JDGn4ps76E 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JDGn4ps76E 00:38:58.642 09:51:52 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.JDGn4ps76E 00:38:58.642 09:51:52 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JDGn4ps76E 00:38:58.642 09:51:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JDGn4ps76E 00:38:58.904 09:51:52 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:58.904 09:51:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:59.164 nvme0n1 00:38:59.164 09:51:52 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:38:59.164 09:51:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:59.164 09:51:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.164 09:51:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.164 09:51:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.164 09:51:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.424 09:51:52 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:38:59.424 09:51:52 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:38:59.424 09:51:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:38:59.424 09:51:52 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:38:59.424 09:51:52 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:38:59.424 09:51:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.424 09:51:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.424 09:51:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.684 09:51:53 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:38:59.684 09:51:53 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:38:59.684 09:51:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:59.684 09:51:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:59.684 09:51:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:59.684 09:51:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:59.684 09:51:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.684 09:51:53 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:38:59.684 09:51:53 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:59.684 09:51:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:59.944 09:51:53 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:38:59.944 09:51:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:59.944 09:51:53 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:00.204 09:51:53 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:00.204 09:51:53 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JDGn4ps76E 00:39:00.204 09:51:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JDGn4ps76E 00:39:00.204 09:51:53 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gnqTS0Tr53 00:39:00.204 09:51:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gnqTS0Tr53 00:39:00.464 09:51:53 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:00.464 09:51:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:00.724 nvme0n1 00:39:00.724 09:51:54 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:00.724 09:51:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:00.985 09:51:54 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:00.985 "subsystems": [ 00:39:00.985 { 00:39:00.985 
"subsystem": "keyring", 00:39:00.985 "config": [ 00:39:00.985 { 00:39:00.985 "method": "keyring_file_add_key", 00:39:00.985 "params": { 00:39:00.985 "name": "key0", 00:39:00.985 "path": "/tmp/tmp.JDGn4ps76E" 00:39:00.985 } 00:39:00.985 }, 00:39:00.985 { 00:39:00.985 "method": "keyring_file_add_key", 00:39:00.985 "params": { 00:39:00.985 "name": "key1", 00:39:00.985 "path": "/tmp/tmp.gnqTS0Tr53" 00:39:00.985 } 00:39:00.985 } 00:39:00.985 ] 00:39:00.985 }, 00:39:00.985 { 00:39:00.985 "subsystem": "iobuf", 00:39:00.985 "config": [ 00:39:00.985 { 00:39:00.985 "method": "iobuf_set_options", 00:39:00.985 "params": { 00:39:00.985 "small_pool_count": 8192, 00:39:00.985 "large_pool_count": 1024, 00:39:00.985 "small_bufsize": 8192, 00:39:00.985 "large_bufsize": 135168 00:39:00.985 } 00:39:00.985 } 00:39:00.985 ] 00:39:00.985 }, 00:39:00.985 { 00:39:00.985 "subsystem": "sock", 00:39:00.985 "config": [ 00:39:00.985 { 00:39:00.985 "method": "sock_impl_set_options", 00:39:00.985 "params": { 00:39:00.985 "impl_name": "posix", 00:39:00.985 "recv_buf_size": 2097152, 00:39:00.985 "send_buf_size": 2097152, 00:39:00.985 "enable_recv_pipe": true, 00:39:00.985 "enable_quickack": false, 00:39:00.985 "enable_placement_id": 0, 00:39:00.985 "enable_zerocopy_send_server": true, 00:39:00.985 "enable_zerocopy_send_client": false, 00:39:00.985 "zerocopy_threshold": 0, 00:39:00.985 "tls_version": 0, 00:39:00.985 "enable_ktls": false 00:39:00.985 } 00:39:00.985 }, 00:39:00.985 { 00:39:00.985 "method": "sock_impl_set_options", 00:39:00.986 "params": { 00:39:00.986 "impl_name": "ssl", 00:39:00.986 "recv_buf_size": 4096, 00:39:00.986 "send_buf_size": 4096, 00:39:00.986 "enable_recv_pipe": true, 00:39:00.986 "enable_quickack": false, 00:39:00.986 "enable_placement_id": 0, 00:39:00.986 "enable_zerocopy_send_server": true, 00:39:00.986 "enable_zerocopy_send_client": false, 00:39:00.986 "zerocopy_threshold": 0, 00:39:00.986 "tls_version": 0, 00:39:00.986 "enable_ktls": false 00:39:00.986 } 00:39:00.986 } 00:39:00.986 ] 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "subsystem": "vmd", 00:39:00.986 "config": [] 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "subsystem": "accel", 00:39:00.986 "config": [ 00:39:00.986 { 00:39:00.986 "method": "accel_set_options", 00:39:00.986 "params": { 00:39:00.986 "small_cache_size": 128, 00:39:00.986 "large_cache_size": 16, 00:39:00.986 "task_count": 2048, 00:39:00.986 "sequence_count": 2048, 00:39:00.986 "buf_count": 2048 00:39:00.986 } 00:39:00.986 } 00:39:00.986 ] 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "subsystem": "bdev", 00:39:00.986 "config": [ 00:39:00.986 { 00:39:00.986 "method": "bdev_set_options", 00:39:00.986 "params": { 00:39:00.986 "bdev_io_pool_size": 65535, 00:39:00.986 "bdev_io_cache_size": 256, 00:39:00.986 "bdev_auto_examine": true, 00:39:00.986 "iobuf_small_cache_size": 128, 00:39:00.986 "iobuf_large_cache_size": 16 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "bdev_raid_set_options", 00:39:00.986 "params": { 00:39:00.986 "process_window_size_kb": 1024 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "bdev_iscsi_set_options", 00:39:00.986 "params": { 00:39:00.986 "timeout_sec": 30 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "bdev_nvme_set_options", 00:39:00.986 "params": { 00:39:00.986 "action_on_timeout": "none", 00:39:00.986 "timeout_us": 0, 00:39:00.986 "timeout_admin_us": 0, 00:39:00.986 "keep_alive_timeout_ms": 10000, 00:39:00.986 "arbitration_burst": 0, 00:39:00.986 "low_priority_weight": 0, 
00:39:00.986 "medium_priority_weight": 0, 00:39:00.986 "high_priority_weight": 0, 00:39:00.986 "nvme_adminq_poll_period_us": 10000, 00:39:00.986 "nvme_ioq_poll_period_us": 0, 00:39:00.986 "io_queue_requests": 512, 00:39:00.986 "delay_cmd_submit": true, 00:39:00.986 "transport_retry_count": 4, 00:39:00.986 "bdev_retry_count": 3, 00:39:00.986 "transport_ack_timeout": 0, 00:39:00.986 "ctrlr_loss_timeout_sec": 0, 00:39:00.986 "reconnect_delay_sec": 0, 00:39:00.986 "fast_io_fail_timeout_sec": 0, 00:39:00.986 "disable_auto_failback": false, 00:39:00.986 "generate_uuids": false, 00:39:00.986 "transport_tos": 0, 00:39:00.986 "nvme_error_stat": false, 00:39:00.986 "rdma_srq_size": 0, 00:39:00.986 "io_path_stat": false, 00:39:00.986 "allow_accel_sequence": false, 00:39:00.986 "rdma_max_cq_size": 0, 00:39:00.986 "rdma_cm_event_timeout_ms": 0, 00:39:00.986 "dhchap_digests": [ 00:39:00.986 "sha256", 00:39:00.986 "sha384", 00:39:00.986 "sha512" 00:39:00.986 ], 00:39:00.986 "dhchap_dhgroups": [ 00:39:00.986 "null", 00:39:00.986 "ffdhe2048", 00:39:00.986 "ffdhe3072", 00:39:00.986 "ffdhe4096", 00:39:00.986 "ffdhe6144", 00:39:00.986 "ffdhe8192" 00:39:00.986 ] 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "bdev_nvme_attach_controller", 00:39:00.986 "params": { 00:39:00.986 "name": "nvme0", 00:39:00.986 "trtype": "TCP", 00:39:00.986 "adrfam": "IPv4", 00:39:00.986 "traddr": "127.0.0.1", 00:39:00.986 "trsvcid": "4420", 00:39:00.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:00.986 "prchk_reftag": false, 00:39:00.986 "prchk_guard": false, 00:39:00.986 "ctrlr_loss_timeout_sec": 0, 00:39:00.986 "reconnect_delay_sec": 0, 00:39:00.986 "fast_io_fail_timeout_sec": 0, 00:39:00.986 "psk": "key0", 00:39:00.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:00.986 "hdgst": false, 00:39:00.986 "ddgst": false 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "bdev_nvme_set_hotplug", 00:39:00.986 "params": { 00:39:00.986 "period_us": 100000, 00:39:00.986 "enable": false 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "bdev_wait_for_examine" 00:39:00.986 } 00:39:00.986 ] 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "subsystem": "nbd", 00:39:00.986 "config": [] 00:39:00.986 } 00:39:00.986 ] 00:39:00.986 }' 00:39:00.986 09:51:54 keyring_file -- keyring/file.sh@114 -- # killprocess 585361 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 585361 ']' 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@950 -- # kill -0 585361 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@951 -- # uname 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 585361 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 585361' 00:39:00.986 killing process with pid 585361 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@965 -- # kill 585361 00:39:00.986 Received shutdown signal, test time was about 1.000000 seconds 00:39:00.986 00:39:00.986 Latency(us) 00:39:00.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:00.986 
=================================================================================================================== 00:39:00.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@970 -- # wait 585361 00:39:00.986 09:51:54 keyring_file -- keyring/file.sh@117 -- # bperfpid=587114 00:39:00.986 09:51:54 keyring_file -- keyring/file.sh@119 -- # waitforlisten 587114 /var/tmp/bperf.sock 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 587114 ']' 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:00.986 09:51:54 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:00.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:00.986 09:51:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.986 09:51:54 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:00.986 "subsystems": [ 00:39:00.986 { 00:39:00.986 "subsystem": "keyring", 00:39:00.986 "config": [ 00:39:00.986 { 00:39:00.986 "method": "keyring_file_add_key", 00:39:00.986 "params": { 00:39:00.986 "name": "key0", 00:39:00.986 "path": "/tmp/tmp.JDGn4ps76E" 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "method": "keyring_file_add_key", 00:39:00.986 "params": { 00:39:00.986 "name": "key1", 00:39:00.986 "path": "/tmp/tmp.gnqTS0Tr53" 00:39:00.986 } 00:39:00.986 } 00:39:00.986 ] 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "subsystem": "iobuf", 00:39:00.986 "config": [ 00:39:00.986 { 00:39:00.986 "method": "iobuf_set_options", 00:39:00.986 "params": { 00:39:00.986 "small_pool_count": 8192, 00:39:00.986 "large_pool_count": 1024, 00:39:00.986 "small_bufsize": 8192, 00:39:00.986 "large_bufsize": 135168 00:39:00.986 } 00:39:00.986 } 00:39:00.986 ] 00:39:00.986 }, 00:39:00.986 { 00:39:00.986 "subsystem": "sock", 00:39:00.986 "config": [ 00:39:00.986 { 00:39:00.986 "method": "sock_impl_set_options", 00:39:00.986 "params": { 00:39:00.986 "impl_name": "posix", 00:39:00.986 "recv_buf_size": 2097152, 00:39:00.986 "send_buf_size": 2097152, 00:39:00.986 "enable_recv_pipe": true, 00:39:00.986 "enable_quickack": false, 00:39:00.986 "enable_placement_id": 0, 00:39:00.986 "enable_zerocopy_send_server": true, 00:39:00.986 "enable_zerocopy_send_client": false, 00:39:00.986 "zerocopy_threshold": 0, 00:39:00.986 "tls_version": 0, 00:39:00.986 "enable_ktls": false 00:39:00.986 } 00:39:00.986 }, 00:39:00.986 { 00:39:00.987 "method": "sock_impl_set_options", 00:39:00.987 "params": { 00:39:00.987 "impl_name": "ssl", 00:39:00.987 "recv_buf_size": 4096, 00:39:00.987 "send_buf_size": 4096, 00:39:00.987 "enable_recv_pipe": true, 00:39:00.987 "enable_quickack": false, 00:39:00.987 "enable_placement_id": 0, 00:39:00.987 "enable_zerocopy_send_server": true, 00:39:00.987 "enable_zerocopy_send_client": false, 00:39:00.987 "zerocopy_threshold": 0, 00:39:00.987 "tls_version": 0, 00:39:00.987 "enable_ktls": false 00:39:00.987 } 00:39:00.987 } 00:39:00.987 ] 00:39:00.987 }, 
00:39:00.987 { 00:39:00.987 "subsystem": "vmd", 00:39:00.987 "config": [] 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "subsystem": "accel", 00:39:00.987 "config": [ 00:39:00.987 { 00:39:00.987 "method": "accel_set_options", 00:39:00.987 "params": { 00:39:00.987 "small_cache_size": 128, 00:39:00.987 "large_cache_size": 16, 00:39:00.987 "task_count": 2048, 00:39:00.987 "sequence_count": 2048, 00:39:00.987 "buf_count": 2048 00:39:00.987 } 00:39:00.987 } 00:39:00.987 ] 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "subsystem": "bdev", 00:39:00.987 "config": [ 00:39:00.987 { 00:39:00.987 "method": "bdev_set_options", 00:39:00.987 "params": { 00:39:00.987 "bdev_io_pool_size": 65535, 00:39:00.987 "bdev_io_cache_size": 256, 00:39:00.987 "bdev_auto_examine": true, 00:39:00.987 "iobuf_small_cache_size": 128, 00:39:00.987 "iobuf_large_cache_size": 16 00:39:00.987 } 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "method": "bdev_raid_set_options", 00:39:00.987 "params": { 00:39:00.987 "process_window_size_kb": 1024 00:39:00.987 } 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "method": "bdev_iscsi_set_options", 00:39:00.987 "params": { 00:39:00.987 "timeout_sec": 30 00:39:00.987 } 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "method": "bdev_nvme_set_options", 00:39:00.987 "params": { 00:39:00.987 "action_on_timeout": "none", 00:39:00.987 "timeout_us": 0, 00:39:00.987 "timeout_admin_us": 0, 00:39:00.987 "keep_alive_timeout_ms": 10000, 00:39:00.987 "arbitration_burst": 0, 00:39:00.987 "low_priority_weight": 0, 00:39:00.987 "medium_priority_weight": 0, 00:39:00.987 "high_priority_weight": 0, 00:39:00.987 "nvme_adminq_poll_period_us": 10000, 00:39:00.987 "nvme_ioq_poll_period_us": 0, 00:39:00.987 "io_queue_requests": 512, 00:39:00.987 "delay_cmd_submit": true, 00:39:00.987 "transport_retry_count": 4, 00:39:00.987 "bdev_retry_count": 3, 00:39:00.987 "transport_ack_timeout": 0, 00:39:00.987 "ctrlr_loss_timeout_sec": 0, 00:39:00.987 "reconnect_delay_sec": 0, 00:39:00.987 "fast_io_fail_timeout_sec": 0, 00:39:00.987 "disable_auto_failback": false, 00:39:00.987 "generate_uuids": false, 00:39:00.987 "transport_tos": 0, 00:39:00.987 "nvme_error_stat": false, 00:39:00.987 "rdma_srq_size": 0, 00:39:00.987 "io_path_stat": false, 00:39:00.987 "allow_accel_sequence": false, 00:39:00.987 "rdma_max_cq_size": 0, 00:39:00.987 "rdma_cm_event_timeout_ms": 0, 00:39:00.987 "dhchap_digests": [ 00:39:00.987 "sha256", 00:39:00.987 "sha384", 00:39:00.987 "sha512" 00:39:00.987 ], 00:39:00.987 "dhchap_dhgroups": [ 00:39:00.987 "null", 00:39:00.987 "ffdhe2048", 00:39:00.987 "ffdhe3072", 00:39:00.987 "ffdhe4096", 00:39:00.987 "ffdhe6144", 00:39:00.987 "ffdhe8192" 00:39:00.987 ] 00:39:00.987 } 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "method": "bdev_nvme_attach_controller", 00:39:00.987 "params": { 00:39:00.987 "name": "nvme0", 00:39:00.987 "trtype": "TCP", 00:39:00.987 "adrfam": "IPv4", 00:39:00.987 "traddr": "127.0.0.1", 00:39:00.987 "trsvcid": "4420", 00:39:00.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:00.987 "prchk_reftag": false, 00:39:00.987 "prchk_guard": false, 00:39:00.987 "ctrlr_loss_timeout_sec": 0, 00:39:00.987 "reconnect_delay_sec": 0, 00:39:00.987 "fast_io_fail_timeout_sec": 0, 00:39:00.987 "psk": "key0", 00:39:00.987 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:00.987 "hdgst": false, 00:39:00.987 "ddgst": false 00:39:00.987 } 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "method": "bdev_nvme_set_hotplug", 00:39:00.987 "params": { 00:39:00.987 "period_us": 100000, 00:39:00.987 "enable": false 00:39:00.987 } 00:39:00.987 
}, 00:39:00.987 { 00:39:00.987 "method": "bdev_wait_for_examine" 00:39:00.987 } 00:39:00.987 ] 00:39:00.987 }, 00:39:00.987 { 00:39:00.987 "subsystem": "nbd", 00:39:00.987 "config": [] 00:39:00.987 } 00:39:00.987 ] 00:39:00.987 }' 00:39:00.987 [2024-05-16 09:51:54.509809] Starting SPDK v24.05-pre git sha1 cc94f3031 / DPDK 24.03.0 initialization... 00:39:00.987 [2024-05-16 09:51:54.509864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587114 ] 00:39:00.987 EAL: No free 2048 kB hugepages reported on node 1 00:39:01.247 [2024-05-16 09:51:54.581952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.247 [2024-05-16 09:51:54.634830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.247 [2024-05-16 09:51:54.768529] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:01.818 09:51:55 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:01.818 09:51:55 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:39:01.818 09:51:55 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:01.818 09:51:55 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:01.818 09:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.078 09:51:55 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:02.078 09:51:55 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:02.078 09:51:55 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:02.078 09:51:55 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.078 09:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:02.339 09:51:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.JDGn4ps76E 
/tmp/tmp.gnqTS0Tr53 00:39:02.339 09:51:55 keyring_file -- keyring/file.sh@20 -- # killprocess 587114 00:39:02.339 09:51:55 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 587114 ']' 00:39:02.339 09:51:55 keyring_file -- common/autotest_common.sh@950 -- # kill -0 587114 00:39:02.599 09:51:55 keyring_file -- common/autotest_common.sh@951 -- # uname 00:39:02.599 09:51:55 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:02.599 09:51:55 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 587114 00:39:02.599 09:51:55 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:39:02.599 09:51:55 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:39:02.599 09:51:55 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 587114' 00:39:02.599 killing process with pid 587114 00:39:02.600 09:51:55 keyring_file -- common/autotest_common.sh@965 -- # kill 587114 00:39:02.600 Received shutdown signal, test time was about 1.000000 seconds 00:39:02.600 00:39:02.600 Latency(us) 00:39:02.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.600 =================================================================================================================== 00:39:02.600 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:02.600 09:51:55 keyring_file -- common/autotest_common.sh@970 -- # wait 587114 00:39:02.600 09:51:56 keyring_file -- keyring/file.sh@21 -- # killprocess 585220 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 585220 ']' 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@950 -- # kill -0 585220 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@951 -- # uname 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 585220 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 585220' 00:39:02.600 killing process with pid 585220 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@965 -- # kill 585220 00:39:02.600 [2024-05-16 09:51:56.119478] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:39:02.600 [2024-05-16 09:51:56.119517] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:39:02.600 09:51:56 keyring_file -- common/autotest_common.sh@970 -- # wait 585220 00:39:02.860 00:39:02.860 real 0m11.121s 00:39:02.860 user 0m26.694s 00:39:02.860 sys 0m2.531s 00:39:02.860 09:51:56 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:02.860 09:51:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:02.860 ************************************ 00:39:02.860 END TEST keyring_file 00:39:02.860 ************************************ 00:39:02.860 09:51:56 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:39:02.860 09:51:56 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:39:02.861 
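With the controller detached and both temporary key files removed, keyring/file.sh finishes by tearing down the two helper processes it started: the bdevperf instance behind /var/tmp/bperf.sock (pid 587114 here) and the nvmf target (pid 585220). In outline the cleanup logged just above amounts to the following; killprocess is the autotest_common.sh helper whose ps/kill/wait steps are visible in the log, and the nvmfpid variable name is illustrative:

    cleanup() {
        rm -f /tmp/tmp.JDGn4ps76E /tmp/tmp.gnqTS0Tr53   # mktemp-generated PSK files
        killprocess "$bperfpid"                         # bdevperf, 587114 in this run
        killprocess "$nvmfpid"                          # nvmf target, 585220 in this run
    }

The remaining '[ 0 -eq 1 ]' checks are autotest.sh skipping the suites that were not enabled for this job before it moves on to post_cleanup.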
09:51:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:39:02.861 09:51:56 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:39:02.861 09:51:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:39:02.861 09:51:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:02.861 09:51:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:02.861 09:51:56 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:39:02.861 09:51:56 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:39:02.861 09:51:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:02.861 09:51:56 -- common/autotest_common.sh@10 -- # set +x 00:39:02.861 09:51:56 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:39:02.861 09:51:56 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:39:02.861 09:51:56 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:39:02.861 09:51:56 -- common/autotest_common.sh@10 -- # set +x 00:39:10.999 INFO: APP EXITING 00:39:10.999 INFO: killing all VMs 00:39:10.999 INFO: killing vhost app 00:39:10.999 INFO: EXIT DONE 00:39:13.545 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:13.545 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:13.545 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:13.546 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:13.546 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:13.546 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:13.807 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:13.807 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:14.068 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:17.371 Cleaning 00:39:17.371 Removing: /var/run/dpdk/spdk0/config 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:17.371 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:17.371 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:17.371 Removing: 
/var/run/dpdk/spdk1/config 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:17.371 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:17.371 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:17.371 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:17.371 Removing: /var/run/dpdk/spdk2/config 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:17.371 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:17.371 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:17.371 Removing: /var/run/dpdk/spdk3/config 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:17.371 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:17.371 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:17.371 Removing: /var/run/dpdk/spdk4/config 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:17.371 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:17.371 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:17.371 Removing: /dev/shm/bdev_svc_trace.1 00:39:17.371 Removing: /dev/shm/nvmf_trace.0 00:39:17.371 Removing: /dev/shm/spdk_tgt_trace.pid129858 00:39:17.371 Removing: /var/run/dpdk/spdk0 00:39:17.371 Removing: /var/run/dpdk/spdk1 00:39:17.371 Removing: /var/run/dpdk/spdk2 00:39:17.371 Removing: /var/run/dpdk/spdk3 00:39:17.371 Removing: /var/run/dpdk/spdk4 00:39:17.371 Removing: /var/run/dpdk/spdk_pid128384 00:39:17.371 Removing: /var/run/dpdk/spdk_pid129858 00:39:17.371 Removing: /var/run/dpdk/spdk_pid130528 00:39:17.371 Removing: /var/run/dpdk/spdk_pid131702 00:39:17.371 Removing: /var/run/dpdk/spdk_pid131801 00:39:17.371 Removing: /var/run/dpdk/spdk_pid133119 
00:39:17.371 Removing: /var/run/dpdk/spdk_pid133158 00:39:17.371 Removing: /var/run/dpdk/spdk_pid133598 00:39:17.371 Removing: /var/run/dpdk/spdk_pid134507 00:39:17.371 Removing: /var/run/dpdk/spdk_pid135181 00:39:17.371 Removing: /var/run/dpdk/spdk_pid135560 00:39:17.371 Removing: /var/run/dpdk/spdk_pid135920 00:39:17.371 Removing: /var/run/dpdk/spdk_pid136208 00:39:17.371 Removing: /var/run/dpdk/spdk_pid136457 00:39:17.371 Removing: /var/run/dpdk/spdk_pid136791 00:39:17.371 Removing: /var/run/dpdk/spdk_pid137141 00:39:17.371 Removing: /var/run/dpdk/spdk_pid137522 00:39:17.371 Removing: /var/run/dpdk/spdk_pid138590 00:39:17.371 Removing: /var/run/dpdk/spdk_pid142165 00:39:17.371 Removing: /var/run/dpdk/spdk_pid142511 00:39:17.371 Removing: /var/run/dpdk/spdk_pid142818 00:39:17.371 Removing: /var/run/dpdk/spdk_pid142913 00:39:17.371 Removing: /var/run/dpdk/spdk_pid143346 00:39:17.371 Removing: /var/run/dpdk/spdk_pid143620 00:39:17.371 Removing: /var/run/dpdk/spdk_pid143996 00:39:17.371 Removing: /var/run/dpdk/spdk_pid144253 00:39:17.371 Removing: /var/run/dpdk/spdk_pid144457 00:39:17.371 Removing: /var/run/dpdk/spdk_pid144704 00:39:17.371 Removing: /var/run/dpdk/spdk_pid144915 00:39:17.371 Removing: /var/run/dpdk/spdk_pid145078 00:39:17.371 Removing: /var/run/dpdk/spdk_pid145519 00:39:17.371 Removing: /var/run/dpdk/spdk_pid145867 00:39:17.371 Removing: /var/run/dpdk/spdk_pid146256 00:39:17.371 Removing: /var/run/dpdk/spdk_pid146562 00:39:17.371 Removing: /var/run/dpdk/spdk_pid146654 00:39:17.371 Removing: /var/run/dpdk/spdk_pid146713 00:39:17.371 Removing: /var/run/dpdk/spdk_pid147071 00:39:17.371 Removing: /var/run/dpdk/spdk_pid147419 00:39:17.371 Removing: /var/run/dpdk/spdk_pid147725 00:39:17.371 Removing: /var/run/dpdk/spdk_pid147929 00:39:17.371 Removing: /var/run/dpdk/spdk_pid148163 00:39:17.371 Removing: /var/run/dpdk/spdk_pid148509 00:39:17.371 Removing: /var/run/dpdk/spdk_pid148864 00:39:17.371 Removing: /var/run/dpdk/spdk_pid149213 00:39:17.371 Removing: /var/run/dpdk/spdk_pid149446 00:39:17.371 Removing: /var/run/dpdk/spdk_pid149642 00:39:17.371 Removing: /var/run/dpdk/spdk_pid149950 00:39:17.371 Removing: /var/run/dpdk/spdk_pid150307 00:39:17.371 Removing: /var/run/dpdk/spdk_pid150656 00:39:17.371 Removing: /var/run/dpdk/spdk_pid151005 00:39:17.371 Removing: /var/run/dpdk/spdk_pid151226 00:39:17.371 Removing: /var/run/dpdk/spdk_pid151432 00:39:17.371 Removing: /var/run/dpdk/spdk_pid151761 00:39:17.371 Removing: /var/run/dpdk/spdk_pid152120 00:39:17.371 Removing: /var/run/dpdk/spdk_pid152468 00:39:17.631 Removing: /var/run/dpdk/spdk_pid152792 00:39:17.631 Removing: /var/run/dpdk/spdk_pid152892 00:39:17.631 Removing: /var/run/dpdk/spdk_pid153298 00:39:17.631 Removing: /var/run/dpdk/spdk_pid157750 00:39:17.631 Removing: /var/run/dpdk/spdk_pid211188 00:39:17.631 Removing: /var/run/dpdk/spdk_pid216222 00:39:17.631 Removing: /var/run/dpdk/spdk_pid228830 00:39:17.631 Removing: /var/run/dpdk/spdk_pid235204 00:39:17.631 Removing: /var/run/dpdk/spdk_pid240030 00:39:17.631 Removing: /var/run/dpdk/spdk_pid240904 00:39:17.631 Removing: /var/run/dpdk/spdk_pid254586 00:39:17.631 Removing: /var/run/dpdk/spdk_pid254655 00:39:17.631 Removing: /var/run/dpdk/spdk_pid255698 00:39:17.631 Removing: /var/run/dpdk/spdk_pid256750 00:39:17.631 Removing: /var/run/dpdk/spdk_pid257823 00:39:17.631 Removing: /var/run/dpdk/spdk_pid258473 00:39:17.631 Removing: /var/run/dpdk/spdk_pid258583 00:39:17.631 Removing: /var/run/dpdk/spdk_pid258842 00:39:17.631 Removing: /var/run/dpdk/spdk_pid258937 00:39:17.631 
Removing: /var/run/dpdk/spdk_pid258939 00:39:17.631 Removing: /var/run/dpdk/spdk_pid259951 00:39:17.631 Removing: /var/run/dpdk/spdk_pid260977 00:39:17.631 Removing: /var/run/dpdk/spdk_pid262097 00:39:17.632 Removing: /var/run/dpdk/spdk_pid262754 00:39:17.632 Removing: /var/run/dpdk/spdk_pid262883 00:39:17.632 Removing: /var/run/dpdk/spdk_pid263123 00:39:17.632 Removing: /var/run/dpdk/spdk_pid264474 00:39:17.632 Removing: /var/run/dpdk/spdk_pid265801 00:39:17.632 Removing: /var/run/dpdk/spdk_pid276439 00:39:17.632 Removing: /var/run/dpdk/spdk_pid276898 00:39:17.632 Removing: /var/run/dpdk/spdk_pid281880 00:39:17.632 Removing: /var/run/dpdk/spdk_pid288804 00:39:17.632 Removing: /var/run/dpdk/spdk_pid291878 00:39:17.632 Removing: /var/run/dpdk/spdk_pid303969 00:39:17.632 Removing: /var/run/dpdk/spdk_pid314579 00:39:17.632 Removing: /var/run/dpdk/spdk_pid316692 00:39:17.632 Removing: /var/run/dpdk/spdk_pid317732 00:39:17.632 Removing: /var/run/dpdk/spdk_pid338576 00:39:17.632 Removing: /var/run/dpdk/spdk_pid343246 00:39:17.632 Removing: /var/run/dpdk/spdk_pid374309 00:39:17.632 Removing: /var/run/dpdk/spdk_pid379676 00:39:17.632 Removing: /var/run/dpdk/spdk_pid381549 00:39:17.632 Removing: /var/run/dpdk/spdk_pid383707 00:39:17.632 Removing: /var/run/dpdk/spdk_pid384045 00:39:17.632 Removing: /var/run/dpdk/spdk_pid384184 00:39:17.632 Removing: /var/run/dpdk/spdk_pid384405 00:39:17.632 Removing: /var/run/dpdk/spdk_pid385114 00:39:17.632 Removing: /var/run/dpdk/spdk_pid387132 00:39:17.632 Removing: /var/run/dpdk/spdk_pid388210 00:39:17.632 Removing: /var/run/dpdk/spdk_pid388680 00:39:17.632 Removing: /var/run/dpdk/spdk_pid391288 00:39:17.632 Removing: /var/run/dpdk/spdk_pid391994 00:39:17.632 Removing: /var/run/dpdk/spdk_pid392767 00:39:17.632 Removing: /var/run/dpdk/spdk_pid397753 00:39:17.632 Removing: /var/run/dpdk/spdk_pid409995 00:39:17.632 Removing: /var/run/dpdk/spdk_pid414926 00:39:17.632 Removing: /var/run/dpdk/spdk_pid422620 00:39:17.632 Removing: /var/run/dpdk/spdk_pid424156 00:39:17.632 Removing: /var/run/dpdk/spdk_pid425848 00:39:17.632 Removing: /var/run/dpdk/spdk_pid431027 00:39:17.632 Removing: /var/run/dpdk/spdk_pid435734 00:39:17.632 Removing: /var/run/dpdk/spdk_pid444630 00:39:17.632 Removing: /var/run/dpdk/spdk_pid444740 00:39:17.632 Removing: /var/run/dpdk/spdk_pid449585 00:39:17.632 Removing: /var/run/dpdk/spdk_pid449841 00:39:17.632 Removing: /var/run/dpdk/spdk_pid450179 00:39:17.632 Removing: /var/run/dpdk/spdk_pid450657 00:39:17.632 Removing: /var/run/dpdk/spdk_pid450762 00:39:17.632 Removing: /var/run/dpdk/spdk_pid455898 00:39:17.632 Removing: /var/run/dpdk/spdk_pid456720 00:39:17.632 Removing: /var/run/dpdk/spdk_pid461885 00:39:17.632 Removing: /var/run/dpdk/spdk_pid465229 00:39:17.632 Removing: /var/run/dpdk/spdk_pid471719 00:39:17.632 Removing: /var/run/dpdk/spdk_pid478574 00:39:17.632 Removing: /var/run/dpdk/spdk_pid488522 00:39:17.632 Removing: /var/run/dpdk/spdk_pid497134 00:39:17.632 Removing: /var/run/dpdk/spdk_pid497171 00:39:17.632 Removing: /var/run/dpdk/spdk_pid520076 00:39:17.632 Removing: /var/run/dpdk/spdk_pid520763 00:39:17.632 Removing: /var/run/dpdk/spdk_pid521457 00:39:17.632 Removing: /var/run/dpdk/spdk_pid522256 00:39:17.632 Removing: /var/run/dpdk/spdk_pid523291 00:39:17.632 Removing: /var/run/dpdk/spdk_pid524148 00:39:17.892 Removing: /var/run/dpdk/spdk_pid525097 00:39:17.892 Removing: /var/run/dpdk/spdk_pid526142 00:39:17.892 Removing: /var/run/dpdk/spdk_pid531184 00:39:17.892 Removing: /var/run/dpdk/spdk_pid531526 00:39:17.892 Removing: 
/var/run/dpdk/spdk_pid538545 00:39:17.892 Removing: /var/run/dpdk/spdk_pid538924 00:39:17.892 Removing: /var/run/dpdk/spdk_pid541462 00:39:17.892 Removing: /var/run/dpdk/spdk_pid548929 00:39:17.892 Removing: /var/run/dpdk/spdk_pid549010 00:39:17.892 Removing: /var/run/dpdk/spdk_pid554758 00:39:17.892 Removing: /var/run/dpdk/spdk_pid557180 00:39:17.892 Removing: /var/run/dpdk/spdk_pid559472 00:39:17.892 Removing: /var/run/dpdk/spdk_pid560977 00:39:17.892 Removing: /var/run/dpdk/spdk_pid563285 00:39:17.892 Removing: /var/run/dpdk/spdk_pid564702 00:39:17.892 Removing: /var/run/dpdk/spdk_pid574745 00:39:17.892 Removing: /var/run/dpdk/spdk_pid575861 00:39:17.892 Removing: /var/run/dpdk/spdk_pid576533 00:39:17.892 Removing: /var/run/dpdk/spdk_pid579468 00:39:17.892 Removing: /var/run/dpdk/spdk_pid580031 00:39:17.892 Removing: /var/run/dpdk/spdk_pid580508 00:39:17.892 Removing: /var/run/dpdk/spdk_pid585220 00:39:17.892 Removing: /var/run/dpdk/spdk_pid585361 00:39:17.892 Removing: /var/run/dpdk/spdk_pid587114 00:39:17.892 Clean 00:39:17.892 09:52:11 -- common/autotest_common.sh@1447 -- # return 0 00:39:17.892 09:52:11 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:39:17.892 09:52:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.892 09:52:11 -- common/autotest_common.sh@10 -- # set +x 00:39:17.892 09:52:11 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:39:17.892 09:52:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.892 09:52:11 -- common/autotest_common.sh@10 -- # set +x 00:39:17.892 09:52:11 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:17.892 09:52:11 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:17.892 09:52:11 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:17.892 09:52:11 -- spdk/autotest.sh@387 -- # hash lcov 00:39:17.892 09:52:11 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:17.892 09:52:11 -- spdk/autotest.sh@389 -- # hostname 00:39:18.152 09:52:11 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-10 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:18.152 geninfo: WARNING: invalid characters removed from testname! 
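From this point the job is collecting code coverage: lcov captures the counters from the instrumented build tree and tags them with the hostname (spdk-cyp-10), geninfo warns that it sanitized that test name, and the follow-up steps merge the capture with the pre-test baseline and strip DPDK, system, and example-app paths from the final report. Condensed, with the long --rc options omitted and SPDK_DIR standing in for the jenkins workspace path:

    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge baseline + test run
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop bundled DPDK
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info        # drop system headers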
00:39:44.731 09:52:35 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:44.731 09:52:38 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:46.643 09:52:39 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:48.024 09:52:41 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:49.437 09:52:42 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:51.366 09:52:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:52.749 09:52:46 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:52.749 09:52:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.749 09:52:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:52.749 09:52:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.749 09:52:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.749 09:52:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.749 09:52:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.749 09:52:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.749 09:52:46 -- paths/export.sh@5 -- $ export PATH 00:39:52.749 09:52:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.749 09:52:46 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:39:52.749 09:52:46 -- common/autobuild_common.sh@437 -- $ date +%s 00:39:52.749 09:52:46 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715845966.XXXXXX 00:39:52.749 09:52:46 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715845966.u2mrsN 00:39:52.749 09:52:46 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:39:52.749 09:52:46 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:39:52.749 09:52:46 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:39:52.749 09:52:46 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:39:52.749 09:52:46 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:39:52.749 09:52:46 -- common/autobuild_common.sh@453 -- $ get_config_params 00:39:52.749 09:52:46 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:39:52.749 09:52:46 -- common/autotest_common.sh@10 -- $ set +x 00:39:52.749 09:52:46 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:39:52.749 09:52:46 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:39:52.749 09:52:46 -- pm/common@17 -- $ local monitor 00:39:52.749 09:52:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:52.749 09:52:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:52.749 09:52:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:52.749 09:52:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:52.749 09:52:46 -- pm/common@21 -- $ date +%s 00:39:52.749 09:52:46 -- pm/common@25 -- $ sleep 1 00:39:52.749 
09:52:46 -- pm/common@21 -- $ date +%s 00:39:52.749 09:52:46 -- pm/common@21 -- $ date +%s 00:39:52.749 09:52:46 -- pm/common@21 -- $ date +%s 00:39:52.749 09:52:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715845966 00:39:52.749 09:52:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715845966 00:39:52.749 09:52:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715845966 00:39:52.749 09:52:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715845966 00:39:52.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715845966_collect-vmstat.pm.log 00:39:52.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715845966_collect-cpu-load.pm.log 00:39:52.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715845966_collect-cpu-temp.pm.log 00:39:52.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715845966_collect-bmc-pm.bmc.pm.log 00:39:53.690 09:52:47 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:39:53.690 09:52:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:39:53.690 09:52:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:53.690 09:52:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:39:53.690 09:52:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:39:53.690 09:52:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:39:53.690 09:52:47 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:53.690 09:52:47 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:53.690 09:52:47 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:53.690 09:52:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:39:53.690 09:52:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:39:53.690 09:52:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:39:53.690 09:52:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:39:53.690 09:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:53.691 09:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:39:53.691 09:52:47 -- pm/common@44 -- $ pid=598949 00:39:53.691 09:52:47 -- pm/common@50 -- $ kill -TERM 598949 00:39:53.691 09:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:53.691 09:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:39:53.691 09:52:47 -- pm/common@44 -- $ pid=598950 00:39:53.691 09:52:47 -- pm/common@50 -- $ kill 
-TERM 598950 00:39:53.691 09:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:53.691 09:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:39:53.691 09:52:47 -- pm/common@44 -- $ pid=598952 00:39:53.691 09:52:47 -- pm/common@50 -- $ kill -TERM 598952 00:39:53.691 09:52:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:53.691 09:52:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:39:53.691 09:52:47 -- pm/common@44 -- $ pid=598979 00:39:53.691 09:52:47 -- pm/common@50 -- $ sudo -E kill -TERM 598979 00:39:53.952 + [[ -n 5512 ]] 00:39:53.952 + sudo kill 5512 00:39:53.964 [Pipeline] } 00:39:53.982 [Pipeline] // stage 00:39:53.988 [Pipeline] } 00:39:54.007 [Pipeline] // timeout 00:39:54.012 [Pipeline] } 00:39:54.031 [Pipeline] // catchError 00:39:54.036 [Pipeline] } 00:39:54.054 [Pipeline] // wrap 00:39:54.060 [Pipeline] } 00:39:54.077 [Pipeline] // catchError 00:39:54.087 [Pipeline] stage 00:39:54.090 [Pipeline] { (Epilogue) 00:39:54.105 [Pipeline] catchError 00:39:54.107 [Pipeline] { 00:39:54.125 [Pipeline] echo 00:39:54.127 Cleanup processes 00:39:54.134 [Pipeline] sh 00:39:54.433 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:54.433 599057 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:39:54.433 599496 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:54.449 [Pipeline] sh 00:39:54.739 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:54.739 ++ grep -v 'sudo pgrep' 00:39:54.739 ++ awk '{print $1}' 00:39:54.739 + sudo kill -9 599057 00:39:54.752 [Pipeline] sh 00:39:55.042 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:07.279 [Pipeline] sh 00:40:07.568 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:07.568 Artifacts sizes are good 00:40:07.583 [Pipeline] archiveArtifacts 00:40:07.590 Archiving artifacts 00:40:08.225 [Pipeline] sh 00:40:08.519 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:08.536 [Pipeline] cleanWs 00:40:08.547 [WS-CLEANUP] Deleting project workspace... 00:40:08.547 [WS-CLEANUP] Deferred wipeout is used... 00:40:08.554 [WS-CLEANUP] done 00:40:08.556 [Pipeline] } 00:40:08.579 [Pipeline] // catchError 00:40:08.593 [Pipeline] sh 00:40:08.882 + logger -p user.info -t JENKINS-CI 00:40:08.893 [Pipeline] } 00:40:08.911 [Pipeline] // stage 00:40:08.918 [Pipeline] } 00:40:08.937 [Pipeline] // node 00:40:08.943 [Pipeline] End of Pipeline 00:40:08.981 Finished: SUCCESS